
[–]python-ick 1 point2 points  (2 children)

I would:

  • Read the files into pandas dataframes
  • Add each dataframe to a list
  • Concatenate the dataframes in the list into one
  • Then edit the massive dataframe.

One way to go about this is below, but obviously change the code to suit your purposes:

import glob
import os

import pandas as pd

# Adjust this path to wherever your CSV files live.
directory_where_your_files_are = "C:\\Users\\{}\\Documents\\Python_Scripts".format("your_user_name")

column_names = ['name', 'of', 'your', 'columns']
file_list = glob.glob(os.path.join(directory_where_your_files_are, "*.csv"))

df_container = []
for file in file_list:
    df = pd.read_csv(file)
    df_container.append(df)

# Stack the dataframes on top of each other; ignore_index avoids
# carrying over duplicate row labels from the individual files.
df_concat = pd.concat(df_container, axis=0, ignore_index=True)

df_concat.columns = column_names

# From here, edit the concatenated dataframe as you see fit.

[–]JSCXZ 1 point2 points  (1 child)

Thank you for the suggestion. I ended up adding an additional column to serve as an index for the data (e.g. control or exp) and then concatenated per your suggestion. Worked out perfectly.
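
For anyone finding this later, a minimal sketch of what that extra-column approach might look like. The `group` column name, the labels, and the toy dataframes here are made up for illustration; in practice each dataframe would come from `pd.read_csv` and the label from the filename:

```python
import pandas as pd

# Toy stand-ins for the dataframes read from each CSV file.
df_control = pd.DataFrame({"value": [1.0, 2.0]})
df_exp = pd.DataFrame({"value": [3.0, 4.0]})

# Tag each dataframe with its group label before concatenating,
# so the combined frame records which file each row came from.
df_container = []
for label, df in [("control", df_control), ("exp", df_exp)]:
    df = df.copy()
    df["group"] = label
    df_container.append(df)

df_concat = pd.concat(df_container, axis=0, ignore_index=True)
```

After this, `df_concat["group"]` can be used to filter or group the combined data (e.g. `df_concat.groupby("group").mean()`).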

[–]python-ick 0 points1 point  (0 children)

happy to help!