Loading a ~200-column CSV file into a record structure in Haskell
I am in the process of replicating some Python code in Haskell. In
Python, I load a couple of CSV files, each with more than 100 columns,
into Pandas data frames. A Pandas data frame is, in short, a tabular
structure that lets me perform a bunch of joins and filter out data; I
generate reports of different shapes using these operations. Of
course, I would love some type checking to help me with these merge
and join operations as I create the different reports.
I am not looking to replicate the full Pandas data-frame functionality
in Haskell. The first thing I want to reach for is the 'record' data
structure. Here are some ideas I have:
1. I need to declare all these 100+ columns across multiple record structures.
2. Some of the columns can have NULL/NaN values, so some of the
attributes of the record structures would be 'Maybe' values. I could
also drop some columns during load and cut down the number of
attributes I create per record structure.
3. Create a dictionary of each record structure to help me index into them. (A sketch of what I have in mind follows this list.)
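To make these points concrete, here is a minimal sketch of what I have
in mind, using the cassava library for the CSV parsing (one extra
dependency, plus containers and vector, which are widespread anyway).
The 'Trade' type, its field names, and the file name 'trades.csv' are
made up for illustration; my real files have 100+ such fields.

```haskell
{-# LANGUAGE DeriveGeneric #-}

module Main where

import qualified Data.ByteString.Lazy as BL
import           Data.Csv             (FromNamedRecord, decodeByName)
import qualified Data.Map.Strict      as Map
import qualified Data.Vector          as V
import           GHC.Generics         (Generic)

-- A hypothetical record covering a handful of the 100+ columns.
-- Columns that can be NULL/NaN in the source become Maybe fields;
-- cassava parses an empty cell as Nothing for a Maybe column.
-- (Other encodings of missing data, such as a literal "NaN" string,
-- may need a custom FromField instance.)
data Trade = Trade
  { tradeId  :: Int
  , symbol   :: String
  , price    :: Maybe Double  -- sometimes missing
  , quantity :: Maybe Int     -- sometimes missing
  } deriving (Show, Generic)

-- With Generic, cassava derives a header-based parser that matches
-- record field names against the CSV column headers, so there is no
-- per-field parsing code to write.
instance FromNamedRecord Trade

main :: IO ()
main = do
  csvData <- BL.readFile "trades.csv"  -- hypothetical input file
  case decodeByName csvData of
    Left err           -> putStrLn err
    Right (_hdr, rows) -> do
      -- Point 3: a dictionary keyed by one of the columns,
      -- for indexing into the loaded records.
      let byId = Map.fromList [ (tradeId r, r) | r <- V.toList rows ]
      print (Map.lookup 42 byId)
```

The Generic-derived instance means each column is declared exactly
once, in the record definition itself, which is the only defence I
have so far against the boilerplate worry below.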
I would like some feedback on the first two points. It seems like
there is a lot of boilerplate code I have to write to declare hundreds
of record attributes. Is this the only sane way to do this? What other
patterns should I consider for solving such a problem?
Also, I do not want to add too many dependencies to the project, but I am open to suggestions.