Data Ingestion¶
Machine learning tasks are typically data heavy, requiring labelled data for supervised learning or unlabelled data for unsupervised learning. In Python, data is typically stored in memory as NumPy arrays at some level, but in most cases you can use higher-level containers built on top of NumPy that are more convenient for tabular data (Pandas), multidimensional gridded data (xarray), or out-of-core and distributed data (Dask).
Each of these libraries allows reading local data in a variety of formats. In many cases the required datasets are large and stored on remote servers, so we will show how to use the Intake library to fetch remote datasets efficiently, including built-in caching to avoid unnecessary downloads when the files are available locally.
To ensure that you understand the properties of your data and how it gets transformed at each step in the workflow, we will use exploratory visualization tools as soon as the data is available and at every subsequent step.
Once you have loaded your data, you will typically need to reshape it appropriately before it can be fed into a machine learning pipeline. Those steps will be detailed in the next tutorial: Alignment and Preprocessing.
Inline loading¶
We'll start with the simple case of loading small local datasets, such as a .csv file for Pandas:
import pandas as pd
training_df = pd.read_csv('../data/landsat5_training.csv')
We can inspect the first several lines of the file using .head(), or a random set of rows using .sample(n):
training_df.head()
To get a better sense of how this dataframe is set up, we can look at .info()
training_df.info()
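Beyond .info(), a quick statistical summary of the numeric columns can help sanity-check the values; this is an optional extra step, not part of the original workflow:
# Optional: per-column summary statistics (count, mean, std, ...)
training_df.describe()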
To use methods like pd.read_csv, all the data needs to be on the local filesystem (or at one of the limited kinds of remote locations supported by Pandas, such as S3). We could of course add explicit commands here to fetch a file from a remote server, but the notebook would then quickly become complex and unreadable.
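For illustration, the manual approach might look like the following sketch; the URL here is purely hypothetical, shown only to convey the boilerplate involved:
import os
import urllib.request

# Hypothetical remote location -- illustrates the steps intake
# will let us avoid writing by hand
url = 'https://example.com/data/landsat5_training.csv'
local_path = '../data/landsat5_training.csv'

# Download only if we don't already have a local copy
if not os.path.exists(local_path):
    urllib.request.urlretrieve(url, local_path)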
Instead, for larger datasets, we can automate those steps using intake so that remote and local data can be treated similarly.
import intake
training = intake.open_csv('../data/landsat5_training.csv')
To get better insight into the data without loading all of it just yet, we can inspect it using .to_dask():
training_dd = training.to_dask()
training_dd.head()
training_dd.info()
To get a full pandas.DataFrame object, use .read() to load in all the data:
training_df = training.read()
training_df.info()
NOTE: There are different items in these two info views, reflecting what is knowable before and after we read all the data. For instance, it is not possible to know the shape of the whole dataset before it is loaded.
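As a quick sketch of that laziness, the number of partitions is known up front, but counting the rows forces an actual read:
# Known without reading the data
print(training_dd.npartitions)
# Forces a scan of the whole file to count the rows
print(len(training_dd))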
Loading multiple files¶
In addition to allowing partitioned reading of files, intake lets the user load and concatenate data across multiple files in one command:
training = intake.open_csv(['../data/landsat5_training.csv', '../data/landsat8_training.csv'])
training_df = training.read()
training_df.info()
NOTE: The length of the dataframe has increased now that we are loading multiple sets of training data.
This can be more simply expressed as:
training = intake.open_csv('../data/landsat*_training.csv')
Sometimes there is information encoded in a file name or path that is lost when data from several files are concatenated. In this example, we lose track of which Landsat version each set of training data came from. To keep that information, we can specify our path as a Python format string, declaring a new field on our data; the field will be populated with the value parsed from each file's path.
training = intake.open_csv('../data/landsat{version:d}_training.csv')
training_df = training.read()
training_df.head()
# Exercise: Try looking at the tail of the data using training_df.tail(), or a random sample using training_df.sample(5)
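To confirm that the new version field was populated, we can inspect its unique values; assuming both the landsat5 and landsat8 files were matched, we expect to see 5 and 8:
# The 'version' column was parsed from the file names
training_df['version'].unique()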
Using Catalogs¶
For more complicated setups, we use the file catalog.yml to declare how the data should be loaded. The catalog specifies the driver and arguments for loading each data source, defines some metadata, and captures any patterns in the file path that should be included in the data. Here is an example of a catalog entry:
sources:
  landsat_5_small:
    description: Small version of Landsat 5 Surface Reflectance Level-2 Science Product.
    driver: rasterio
    cache:
      - argkey: urlpath
        regex: 'earth-data/landsat'
        type: file
    args:
      urlpath: 's3://earth-data/landsat/small/LT05_L1TP_042033_19881022_20161001_01_T1_sr_band{band:d}.tif'
      chunks:
        band: 1
        x: 50
        y: 50
      concat_dim: band
      storage_options: {'anon': True}
The urlpath can be a path to a file, a list of files, or a path with glob notation. Alternatively, the path can be written as a Python-style format string; in that case, the fields specified in the string will be parsed from the filenames and returned in the data.
cat = intake.open_catalog('../catalog.yml')
list(cat)
# Exercise: Read the description of the landsat_5_small data source using cat.landsat_5_small.description
NOTE: If you don't have the data cached yet, then the next cell will take a few seconds.
landsat_5 = cat.landsat_5_small
landsat_5.to_dask()
The data has not actually been loaded yet, so we don't have access to the data values, but we do have access to the coordinates and metadata.
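As a quick sketch (the landsat_5_dd name here is just illustrative), we can poke at the lazily loaded object to see what is available before any pixel values are fetched from S3:
# Coordinates and metadata are available without reading pixel values
landsat_5_dd = landsat_5.to_dask()
print(landsat_5_dd.coords)
print(landsat_5_dd.attrs)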
import hvplot.intake
intake.output_notebook()
import holoviews as hv
hv.extension('bokeh')
We can quickly generate a plot of each of the landsat bands using the band_image plot declared in the catalog. Here is the relevant part of catalog.yml:
metadata:
  plots:
    band_image:
      kind: 'image'
      x: 'x'
      y: 'y'
      groupby: 'band'
      rasterize: True
      width: 400
      dynamic: False
landsat_5.hvplot.band_image()
Accessing the data¶
So far we have been looking at the intake data source object landsat_5. To access the data on this object, we need to read it. If the data are big, we can use the .to_dask() method to create a dask-backed xarray.DataArray. If the data are small, we can use the .read() method to read all the data straight into a regular xarray.DataArray held in memory. Once in an xarray object, the data can be more easily manipulated and visualized.
type(landsat_5)
Xarray DataArray¶
To get an xarray object, we'll use the .read() method:
landsat_5_xda = landsat_5.read()
type(landsat_5_xda)
We can use tab completion to explore the other attributes and methods available on our xarray.DataArray object.
# Exercise: Try typing landsat_5_xda. and press [tab] - don't forget the trailing dot!
Numpy Array¶
Machine learning pipelines such as scikit-learn accept NumPy arrays as input. These arrays are accessible on DataArray objects via the .values attribute:
landsat_5_npa = landsat_5_xda.values
type(landsat_5_npa)
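As a final sanity check (an extra step beyond the original flow), we can confirm the array's shape and dtype; given the concat_dim declared in the catalog, we expect the axes to be ordered (band, y, x):
# Expect a 3D array: one slice per band, then the spatial dimensions
print(landsat_5_npa.shape, landsat_5_npa.dtype)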
Next:¶
Now that you have loaded your data, you will typically need to reshape it appropriately before it can be fed into a machine learning pipeline. These steps are detailed in the next tutorial: Alignment and Preprocessing.