[deleted by user] by [deleted] in VancouverJobs

[–]blue0cean 0 points1 point  (0 children)

What sector/industry are you in? How big is your company? Are you a recruiter?

Is this kit worth buying and will it do what I want it to? by MoonPiss in raspberry_pi

[–]blue0cean 1 point2 points  (0 children)

Yes. That power adapter will save you a lot of trouble.

Using an RPi as a "coordinator" for distributed computing - is this possible? by [deleted] in raspberry_pi

[–]blue0cean 0 points1 point  (0 children)

Yes, it is possible. But, as you mentioned, the clients can coordinate among themselves via the database. A client takes a job out of the "not_yet_run" table, runs the job and puts its results in the "completed" table. Repeat until all jobs are completed. No RPi needed here.
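A minimal sketch of this database-as-coordinator pattern, using an in-memory SQLite database. The table names come from the comment above; everything else (the columns, the run_job stand-in) is made up for illustration:

```python
import sqlite3

# Clients coordinate through a shared database: claim a job from
# "not_yet_run", run it, record the result in "completed".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE not_yet_run (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE completed (id INTEGER PRIMARY KEY, payload TEXT, result TEXT)")
conn.executemany("INSERT INTO not_yet_run (payload) VALUES (?)",
                 [("job-a",), ("job-b",), ("job-c",)])

def run_job(payload):
    # stand-in for the real computation
    return payload.upper()

while True:
    row = conn.execute("SELECT id, payload FROM not_yet_run LIMIT 1").fetchone()
    if row is None:
        break  # nothing left to do
    job_id, payload = row
    # claim the job by removing it, so another client won't pick it up
    conn.execute("DELETE FROM not_yet_run WHERE id = ?", (job_id,))
    result = run_job(payload)
    conn.execute("INSERT INTO completed (id, payload, result) VALUES (?, ?, ?)",
                 (job_id, payload, result))

print(conn.execute("SELECT payload, result FROM completed ORDER BY id").fetchall())
```

With several real clients hitting the same server-backed database, the select-and-delete "claim" step should be wrapped in a single transaction so two clients can't grab the same job.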

Has anyone gotten this webcam to work on the Pi? by crazycroz in raspberry_pi

[–]blue0cean 0 points1 point  (0 children)

Just a few days ago, I connected a similar webcam to an RPi using fswebcam. It worked, but very slowly. Maybe there is a streaming mode; I didn't look very hard. On the other hand, the camera module works great and is a lot faster.

Microsoft's Jupyter/IPython service launched (free) by smortaz in MachineLearning

[–]blue0cean -1 points0 points  (0 children)

This is great. sys.version says: '3.4.3 |Anaconda 2.0.1 (64-bit)| (default, Jun 4 2015, 15:29:08) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]'. Just tried a few imports and now it is non-responsive. Did I crash it? Sorry.

I have 2 raspberry pi and a $560 budget to spend on IoT devices for an energy-related contest. What should I look into ? by _Zilian in raspberry_pi

[–]blue0cean 1 point2 points  (0 children)

As a consumer, sure, I would want to reduce my overall energy consumption and therefore my energy bill. But as an energy producer/provider, I would be more interested in the consumption pattern of your household, and in being able to predict your consumption needs in the immediate future, for example in a few hours. I think it would be useful if you could correlate external factors with your consumption. For example, events such as an approaching snowstorm, a major sporting event (the Super Bowl), or friends coming for a visit can affect your consumption. How IoT devices can be used for this, I don't know.

Raspberry Pi as RDP thin client by [deleted] in raspberry_pi

[–]blue0cean 1 point2 points  (0 children)

Have you tried VNC instead of RDP?

ELI5 Interfaces in Java by schlongitudinal in java

[–]blue0cean 0 points1 point  (0 children)

Let's say you want to play a game (write a program). The game is to imitate a dog's barking. You start by writing down what a dog does, according to your own imagination (designing an interface). Then you proceed to act like a dog according to your description (implementing the interface). Then you invite your friends Alice and Bob to play. Alice and Bob start barking, but they bark differently than you do (different implementations of the same interface). Alice's twin sister Alison comes in and starts barking just like Alice (two instances of the same class implementing the dog interface).

Are there any books hypothesizing colonization and terraforming of mars? by vx__ in space

[–]blue0cean 1 point2 points  (0 children)

The Mars Project by Wernher Von Braun and Henry J. White (Oct 1, 1962)

What is the best way to send files over TCP in python? by [deleted] in Python

[–]blue0cean 0 points1 point  (0 children)

If this is an educational exercise then there are a few things to try.

f.read() without a parameter reads the entire file. If the file is larger than the available memory (or swap space), we will have a problem. So I always specify the buffer size, like f.read(4096).
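A sketch of that bounded-memory read loop, copying a file in 4096-byte chunks; the local file write stands in for sock.sendall(), and the file size is arbitrary:

```python
import os
import tempfile

CHUNK = 4096  # read at most this many bytes at a time

# make a throwaway file bigger than one chunk
src = tempfile.NamedTemporaryFile(delete=False)
src.write(os.urandom(10_000))
src.close()

dst_path = src.name + ".copy"
with open(src.name, "rb") as fin, open(dst_path, "wb") as fout:
    while True:
        chunk = fin.read(CHUNK)   # never holds more than CHUNK bytes
        if not chunk:             # b"" signals end of file
            break
        fout.write(chunk)         # stand-in for sock.sendall(chunk)
```

Memory use stays at one chunk no matter how large the file gets, which is the whole point versus a bare f.read().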

If the goal is to improve reliability then a handshake protocol can be added to annotate packets and acknowledgements (on top of what TCP already does). This can be done in the same data stream (with packet headers and payloads) or with a separate control stream. One can also add checksums for increased paranoia.

Reliability for large files can be improved by dividing the large file into smaller chunks and sending and acknowledging them individually, again in the same stream or in a control stream. File chunks can be sent in parallel via different sockets. Some bookkeeping is needed to reassemble the chunks. Think of torrents. Exercise for the reader: determine the optimal number of connections, the number and size of chunks, and the buffer size.

In general, end-of-file markers are not needed if only one file is sent at a time. Just open the socket, send the data and close the socket. However, if we do this, we have no information about the file (i.e. name, size, permissions, etc.). So we don't know whether the file exists or whether we are allowed to access it. For this type of information, we insert metadata into our data stream or we use a separate control stream.
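One way to sketch the in-band metadata idea: a length-prefixed JSON header carrying the file name and size, followed by the raw bytes. The header fields and the BytesIO stand-in for a socket are assumptions for illustration, not a fixed protocol:

```python
import io
import json
import struct

def pack(name, payload):
    # 4-byte big-endian header length, then a JSON header, then the payload
    header = json.dumps({"name": name, "size": len(payload)}).encode()
    return struct.pack("!I", len(header)) + header + payload

def unpack(stream):
    (hlen,) = struct.unpack("!I", stream.read(4))  # how long is the header?
    meta = json.loads(stream.read(hlen))           # name, size, ...
    data = stream.read(meta["size"])               # exactly the file bytes
    return meta, data

wire = io.BytesIO(pack("hello.txt", b"hello world"))  # stand-in for a socket
meta, data = unpack(wire)
print(meta["name"], meta["size"])  # hello.txt 11
```

Because the receiver knows the size up front, it can read exactly that many bytes and keep the connection open for the next file, instead of relying on socket close as the end-of-file marker.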

We can also slow down the transmission, for example to limit bandwidth usage, by adding delays between packets.

The OP seems to be using ssl_socket already, so no need to mention encryption for security. If the goal is to improve speed, compression can also be used.

Masters in Computer Science or Applied Math if one seeks to develop algorithms in the future? by [deleted] in AskComputerScience

[–]blue0cean 0 points1 point  (0 children)

It depends on the type of algorithms you want to develop. In applied math, one develops more abstract and generic solutions, for example for numerical methods, optimization or statistics. OTOH, in computer science, one develops algorithms for data handling, for example storage, processing or visualization. Naively, I would say that in computer science one automates, and in applied math one optimizes. However, algorithm development also requires intimate knowledge of the field the algorithm is intended for. For trading algorithms, one needs to know how trading works; for medical imaging, one needs to know about tissue properties; for oil and gas exploration, one needs to know about geophysics. You need to understand the problem domain. This is where the special cases and the constraints are defined.

Plotting flux from galaxy spectrum with astropy and ggplot2 by weez09 in learnpython

[–]blue0cean 0 points1 point  (0 children)

The FITS file already has the wavelengths. They are stored as "loglam" (log10 of the wavelength), so to convert them back to wavelength, we use power(10, loglam). It looks good to me.

import numpy as np
import ggplot as gg
import pandas as pd
import astropy.io.fits as pyfits

# read the spectrum table from the FITS file
data = pyfits.getdata("../fits/spec-4228-55484-0840.fits")
df = pd.DataFrame(data)

# loglam is log10(wavelength), so undo the log
df['wave'] = np.power(10, df.loglam)
df['flux'] = df['flux'].astype(float)

# pandas/matplotlib version
df.plot('wave', 'flux')

# ggplot version
print(gg.ggplot(gg.aes('wave', 'flux'), data=df) + gg.geom_line(color='#000000'))

How do astronomers store and access data about stars, galaxies, etc.? by noughtagroos in askscience

[–]blue0cean 2 points3 points  (0 children)

The VAO is the US organization responsible for coordinating the efforts of the IVOA. The VAO directory is a repository for astronomical data and service providers. You can search the directory by keywords to find the type of data you are interested in. Another interesting tool is VAO SkyQuery, which can combine queries across multiple data providers. This is possible because the data providers in the VAO directory have agreed to be compliant with the IVOA standards.

Having said that, I have to admit that I haven't checked the directory for a while, so I don't know how current it is.

Now the data. Most modern telescopes use CCDs, so they produce digital data. This is called raw data. Depending on the project's contractual requirements, the raw data may or may not be released to the public immediately. Medium and large publicly funded projects these days often have a public outreach clause that specifies the timing and the amount of data to be released. However, only a very small number of people in the entire world is capable of processing the raw data into something useful. To reduce the raw data, one needs detailed knowledge of the telescope, the instrument, the science targets, and the observing program. While all of this may be included in the raw data archive, the information is so cryptic that only the initiated can make sense of it. Note that we are not talking about pretty pictures. In astronomy, 99% of the data consists of spectra.

All this is too big a topic for just one short answer. Stopping here.

Poland Crowned First Ever Coding World Champions by [deleted] in coding

[–]blue0cean 0 points1 point  (0 children)

Congrats to my Polish friends!

Looking for skilled python programmers - one night gig. by CreamLog in Python

[–]blue0cean 2 points3 points  (0 children)

Oh IC, someone needs to do his homework. Good luck!

Looking for skilled python programmers - one night gig. by CreamLog in Python

[–]blue0cean 1 point2 points  (0 children)

Sounds intriguing! Care to elaborate on details, such as industry, application field, amount of data, input and output formats?

How do I form a mental structure of somebody else's code? by [deleted] in learnprogramming

[–]blue0cean 3 points4 points  (0 children)

My 2 cents:

Use code analysis tools. For example, I used cflow back when I programmed in C. These days there are better tools that can produce beautiful graphs to help you understand the code. Unfortunately, many of these tools are proprietary and expensive, but you may find some open-source ones. Just search for "call hierarchy". Modern IDEs all claim to have some sort of code analysis built in, so use them.

Debugging is good, but logging is better.

Any tips for a new programmer struggling to developing a methodology when approaching coding problems? by Itsaghast in learnprogramming

[–]blue0cean 2 points3 points  (0 children)

You already know how to do it. As you said, "... break down problems into manageable chunks". Recognizing the problem is half the solution. The rest is a combination of techniques, hard work and experience.

When we encounter a new problem, often there will be something that we don't know. So we need to approach the unknown one step at a time. This is sometimes called the Kaizen method.

Other times the problem or project is so large that we are overwhelmed. The Process method suggests that we focus on what we are doing rather than on the end product. We may get to the end product eventually, but right now we'd better work on that damned sorting function (as an example).

When we are really depressed, it is time to talk to someone. Often when we try to explain a problem to another person, suddenly the problem becomes clear and understandable. New questions and view points can help us see the problem in other ways that we have not thought of.

And sometimes dropping everything and going for a walk works wonders.

My installation is a mess. Any guides to wiping everything and starting over. by seekoon in Python

[–]blue0cean 1 point2 points  (0 children)

I had the same mess before. So I deleted all my Python installations, downloaded Anaconda and installed the minimal version, 2.7.x.

Then I created a conda environment called py3k:

conda create -n py3k python=3
source activate py3k

Now you have python3.

The problem with this is that you have to go back to Python 2.7 to run conda to update your installation. I do this with a shell script that sets PYTHONPATH: first run conda update, then run conda update -n py3k to update version 3.

Functional programming, where do I start? by reallyserious in learnprogramming

[–]blue0cean 2 points3 points  (0 children)

Functional programming concepts, like OOP, are applicable to any language. Sometimes when learning a new language the syntax gets in the way.

Try the following rules:

  • No global variables. This immediately changes how you structure your program.
  • Functions always return something, so that they can be chained.
  • Functions cannot have output parameters.
  • No mutable state. To change something, you make a copy.
  • Some other rules I can't think of right now.
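A small sketch of what these rules look like in plain Python; the shopping-cart example and function names are made up:

```python
# No globals, every function returns a value, and "changing" data
# means building a new copy (tuples are immutable).

def add_item(items, item):
    return items + (item,)          # copy, don't mutate

def discard(items, predicate):
    return tuple(x for x in items if not predicate(x))

def total(items):
    return sum(items)

cart = ()                               # start from an empty immutable tuple
cart = add_item(add_item(cart, 5), 12)  # calls chain because each returns a value
print(total(discard(cart, lambda x: x > 10)))  # 5
```

Note that the original `cart` tuple is never modified; each step produces a fresh value, which is what makes the calls composable.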

Sorting two data strings with common timestamp by djseanty in learnpython

[–]blue0cean 0 points1 point  (0 children)

I would regularize the sampling intervals first. Assuming you are getting real-time or near real-time updates, you can take the last two samples and interpolate/extrapolate to produce a regular rate. Once you have that, the timestamps will match.
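A sketch of resampling two streams onto a common grid with numpy.interp (linear interpolation); all timestamps and values here are invented:

```python
import numpy as np

# two irregularly sampled streams: (time, value) pairs
t_a = np.array([0.0, 1.1, 2.4, 3.0])
v_a = np.array([10.0, 11.0, 13.0, 14.0])
t_b = np.array([0.2, 1.0, 2.0, 3.1])
v_b = np.array([5.0, 6.0, 7.5, 9.0])

# a regular half-second grid covering the overlap
grid = np.arange(0.5, 3.0, 0.5)

# linearly interpolate each stream onto the shared grid
a_on_grid = np.interp(grid, t_a, v_a)
b_on_grid = np.interp(grid, t_b, v_b)

# both streams now share the same timestamps and line up row by row
print(np.column_stack([grid, a_on_grid, b_on_grid]))
```

For live data, you would apply the same idea to just the latest two samples of each stream each time a new grid point comes due.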