⚠️ IMPORTANT NOTICE ⚠️
This site is ARCHIVED and will NO LONGER BE UPDATED.
For updated Tutorial material, please visit the Pythia landsat-ml cookbook.
For Topic Examples, head over to the HoloViz Examples website.

Data Ingestion

Machine learning tasks are typically data heavy, requiring labelled data for supervised learning or unlabelled data for unsupervised learning. In Python, data is typically stored in memory as NumPy arrays at some level, but in most cases you can use higher-level containers built on top of NumPy that are more convenient for tabular data (Pandas), multidimensional gridded data (xarray), or out-of-core and distributed data (Dask).
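
As a minimal sketch of how the same values can live in each of these containers (the names here are invented for illustration):

import numpy as np
import pandas as pd
import xarray as xr
import dask.array as da

values = np.arange(6.0).reshape(2, 3)               # raw NumPy array
df = pd.DataFrame(values, columns=['a', 'b', 'c'])  # tabular view (Pandas)
xda = xr.DataArray(values, dims=('y', 'x'))         # labelled gridded view (xarray)
dda = da.from_array(values, chunks=(1, 3))          # chunked, lazy view (Dask)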

Each of these libraries can read local data in a variety of formats. In many cases the required datasets are large and stored on remote servers, so we will show how to use the Intake library to fetch remote datasets efficiently, including its built-in caching to avoid unnecessary downloads when the files are already available locally.

To ensure that you understand the properties of your data and how it gets transformed at each step in the workflow, we will use exploratory visualization tools as soon as the data is available and at every subsequent step.

Once you have loaded your data, you will typically need to reshape it appropriately before it can be fed into a machine learning pipeline. Those steps will be detailed in the next tutorial: Alignment and Preprocessing.

Inline loading

We'll start with the simple case of loading small local datasets, such as a .csv file for Pandas:

In [1]:
import pandas as pd

training_df = pd.read_csv('../data/landsat5_training.csv')

We can inspect the first several rows of the dataframe using .head(), or a random set of rows using .sample(n).

In [2]:
training_df.head()
Out[2]:
image type easting northing red green blue nir ndvi bn bnn
0 LT05_L1TP_042033_19881022_20161001_01_T1 water 348586.0 4286269.0 182 351 319 130 -0.166667 2.453846 -0.420935
1 LT05_L1TP_042033_19881022_20161001_01_T1 water 338690.0 4323890.0 620 656 527 433 -0.177588 1.217090 -0.097917
2 LT05_L1TP_042033_19881022_20161001_01_T1 veg 345930.0 4360830.0 358 506 272 5411 0.875888 0.050268 0.904276
3 LT05_L1TP_042033_19881022_20161001_01_T1 veg 344490.0 4363590.0 343 639 374 5826 0.888799 0.064195 0.879355
4 LT05_L1TP_042033_19881022_20161001_01_T1 veg 346410.0 4360620.0 360 611 325 5405 0.875108 0.060130 0.886562

To get a better sense of the structure of this dataframe, we can look at .info().

In [3]:
training_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 22 entries, 0 to 21
Data columns (total 11 columns):
image       22 non-null object
type        22 non-null object
easting     22 non-null float64
northing    22 non-null float64
red         22 non-null int64
green       22 non-null int64
blue        22 non-null int64
nir         22 non-null int64
ndvi        22 non-null float64
bn          22 non-null float64
bnn         22 non-null float64
dtypes: float64(5), int64(4), object(2)
memory usage: 2.0+ KB

To use methods like pd.read_csv, all the data needs to be on the local filesystem (or on one of the limited remote storage specifications supported by Pandas, such as S3). We could of course add explicit commands here to fetch each file from a remote server, but the notebook would quickly become complex and unreadable.
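
For instance, fetching one remote file by hand might look like the following sketch (the URL is hypothetical), and such boilerplate multiplies quickly once caching, retries, and multiple files are involved:

import urllib.request
import pandas as pd

url = 'https://example.com/data/landsat5_training.csv'  # hypothetical URL
local_path, _ = urllib.request.urlretrieve(url)         # download to a temp file
training_df = pd.read_csv(local_path)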

Instead, for larger datasets, we can automate those steps using Intake, so that remote and local data can be treated in the same way.

In [4]:
import intake

training = intake.open_csv('../data/landsat5_training.csv')

To get insight into the data without loading all of it just yet, we can inspect it using .to_dask(), which creates a lazy Dask DataFrame.

In [5]:
training_dd = training.to_dask()
training_dd.head()
Out[5]:
image type easting northing red green blue nir ndvi bn bnn
0 LT05_L1TP_042033_19881022_20161001_01_T1 water 348586.0 4286269.0 182 351 319 130 -0.166667 2.453846 -0.420935
1 LT05_L1TP_042033_19881022_20161001_01_T1 water 338690.0 4323890.0 620 656 527 433 -0.177588 1.217090 -0.097917
2 LT05_L1TP_042033_19881022_20161001_01_T1 veg 345930.0 4360830.0 358 506 272 5411 0.875888 0.050268 0.904276
3 LT05_L1TP_042033_19881022_20161001_01_T1 veg 344490.0 4363590.0 343 639 374 5826 0.888799 0.064195 0.879355
4 LT05_L1TP_042033_19881022_20161001_01_T1 veg 346410.0 4360620.0 360 611 325 5405 0.875108 0.060130 0.886562
In [6]:
training_dd.info()
<class 'dask.dataframe.core.DataFrame'>
Columns: 11 entries, image to bnn
dtypes: object(2), float64(5), int64(4)
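
Operations on this Dask DataFrame are lazy: as a quick sketch, computing a column statistic only reads the data when .compute() is called.

mean_ndvi = training_dd['ndvi'].mean()  # builds a task graph; nothing is read yet
print(mean_ndvi.compute())              # triggers the actual read and reduction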

To get a full pandas.DataFrame object, use .read() to load in all the data.

In [7]:
training_df = training.read()
training_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 22 entries, 0 to 21
Data columns (total 11 columns):
image       22 non-null object
type        22 non-null object
easting     22 non-null float64
northing    22 non-null float64
red         22 non-null int64
green       22 non-null int64
blue        22 non-null int64
nir         22 non-null int64
ndvi        22 non-null float64
bn          22 non-null float64
bnn         22 non-null float64
dtypes: float64(5), int64(4), object(2)
memory usage: 2.0+ KB

NOTE: The two info views show different information, reflecting what is knowable before and after all the data are read. For instance, the number of rows (and therefore the memory usage) of the whole dataset cannot be known until the data are actually loaded.
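
As a concrete illustration of that difference:

print(len(training_df))         # 22; the data are in memory, so this is free
print(training_dd.npartitions)  # known without reading: the number of partitions
# len(training_dd) would have to read the data just to count the rows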

Loading multiple files

In addition to allowing partitioned reading of files, Intake lets the user load and concatenate data across multiple files in one command.

In [8]:
training = intake.open_csv(['../data/landsat5_training.csv', '../data/landsat8_training.csv'])
In [9]:
training_df = training.read()
training_df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 50 entries, 0 to 27
Data columns (total 11 columns):
image       50 non-null object
type        50 non-null object
easting     50 non-null float64
northing    50 non-null float64
red         50 non-null int64
green       50 non-null int64
blue        50 non-null int64
nir         50 non-null int64
ndvi        50 non-null float64
bn          50 non-null float64
bnn         50 non-null float64
dtypes: float64(5), int64(4), object(2)
memory usage: 4.7+ KB

NOTE: The length of the dataframe has increased now that we are loading multiple sets of training data.

This can be more simply expressed as:

In [10]:
training = intake.open_csv('../data/landsat*_training.csv')

Sometimes data encoded in a file name or path is lost when data from multiple files are concatenated; in this example, we lose track of which Landsat version each training set came from. To keep that information, we can specify the path as a Python format string that declares a new field on our data, and that field will be populated from its value in each file's path.

In [11]:
training = intake.open_csv('../data/landsat{version:d}_training.csv')
training_df = training.read()
training_df.head()
Out[11]:
image type easting northing red green blue nir ndvi bn bnn version
0 LT05_L1TP_042033_19881022_20161001_01_T1 water 348586.0 4286269.0 182 351 319 130 -0.166667 2.453846 -0.420935 5
1 LT05_L1TP_042033_19881022_20161001_01_T1 water 338690.0 4323890.0 620 656 527 433 -0.177588 1.217090 -0.097917 5
2 LT05_L1TP_042033_19881022_20161001_01_T1 veg 345930.0 4360830.0 358 506 272 5411 0.875888 0.050268 0.904276 5
3 LT05_L1TP_042033_19881022_20161001_01_T1 veg 344490.0 4363590.0 343 639 374 5826 0.888799 0.064195 0.879355 5
4 LT05_L1TP_042033_19881022_20161001_01_T1 veg 346410.0 4360620.0 360 611 325 5405 0.875108 0.060130 0.886562 5
In [12]:
# Exercise: Try looking at the tail of the data using training_df.tail(), or a random sample using training_df.sample(5)
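
One quick way to confirm that rows from both files are present is to count the samples per parsed version (a small sketch):

# Count training samples per Landsat version parsed from the file names:
print(training_df.groupby('version').size())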

Using Catalogs

For more complicated setups, we use a catalog.yml file to declare how the data should be loaded. A catalog entry specifies the driver and arguments used to load the data, defines some metadata, and captures any patterns in the file path that should be included in the data. Here is an example of a catalog entry:

sources:
  landsat_5_small:
    description: Small version of Landsat 5 Surface Reflectance Level-2 Science Product.
    driver: rasterio
    cache:
      - argkey: urlpath
        regex: 'earth-data/landsat'
        type: file
    args:
      urlpath: 's3://earth-data/landsat/small/LT05_L1TP_042033_19881022_20161001_01_T1_sr_band{band:d}.tif'
      chunks:
        band: 1
        x: 50
        y: 50
      concat_dim: band
      storage_options: {'anon': True}

The urlpath can be a path to a file, a list of files, or a path with glob notation. Alternatively, the path can be written as a Python-style format string, in which case the fields specified in that string will be parsed from the filenames and returned in the data.
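
For comparison, the catalog entry above could also be expressed inline (a sketch, assuming the intake-xarray plugin that provides the rasterio driver is installed), although the inline form loses the caching specification and metadata that the catalog provides:

landsat_5_small = intake.open_rasterio(
    's3://earth-data/landsat/small/LT05_L1TP_042033_19881022_20161001_01_T1_sr_band{band:d}.tif',
    chunks={'band': 1, 'x': 50, 'y': 50},
    concat_dim='band',
    storage_options={'anon': True})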

In [13]:
cat = intake.open_catalog('../catalog.yml')
list(cat)
Out[13]:
['landsat_5_small',
 'landsat_8_small',
 'landsat_5',
 'landsat_8',
 'google_landsat_band',
 'amazon_landsat_band',
 'fluxnet_daily',
 'fluxnet_metadata',
 'seattle_lidar']
In [14]:
# Exercise: Read the description of the landsat_5_small data source using cat.landsat_5_small.description

NOTE: If you don't have the data cached yet, then the next cell will take a few seconds.

In [15]:
landsat_5 = cat.landsat_5_small
landsat_5.to_dask()
Out[15]:
<xarray.DataArray (band: 6, y: 300, x: 300)>
dask.array<shape=(6, 300, 300), dtype=float64, chunksize=(1, 50, 50)>
Coordinates:
  * y        (y) float64 4.309e+06 4.309e+06 4.309e+06 ... 4.264e+06 4.264e+06
  * x        (x) float64 3.324e+05 3.326e+05 3.327e+05 ... 3.771e+05 3.772e+05
  * band     (band) int64 1 2 3 4 5 7
Attributes:
    transform:   (150.0, 0.0, 332325.0, 0.0, -150.0, 4309275.0)
    crs:         +init=epsg:32611
    res:         (150.0, 150.0)
    is_tiled:    0
    nodatavals:  (nan,)

The data has not yet been loaded, so we don't have access to the actual data values, but we do have access to the coordinates and metadata.
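
For example, coordinates and attributes can be examined without triggering any reads (a small sketch):

landsat_5_dda = landsat_5.to_dask()
print(landsat_5_dda.band.values)   # [1 2 3 4 5 7]; coordinates load eagerly
print(landsat_5_dda.attrs['crs'])  # metadata such as the CRS is also available
# Reductions like landsat_5_dda.mean() stay lazy until .compute() is called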

Visualizing the data

To get a quick sense of the data, we can plot it using hvPlot, which provides interactive plotting commands for Intake, Pandas, xarray, Dask, and GeoPandas objects. We'll look more closely at hvPlot and its options in later tutorials.

In [16]:
import hvplot.intake
intake.output_notebook()

import holoviews as hv
hv.extension('bokeh')
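
hvPlot also works directly on the Pandas dataframe of training points that we loaded earlier; as a quick sketch (assuming hvplot.pandas is available in this environment):

import hvplot.pandas  # adds the .hvplot accessor to Pandas objects

# Scatter the training points in map coordinates, colored by land-cover type:
training_df.hvplot.scatter(x='easting', y='northing', by='type')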

We can quickly generate a plot of each of the landsat bands using the overview plot declared in the catalog. Here is the relevant part of catalog.yml:

metadata:
  plots:
    band_image:
      kind: 'image'
      x: 'x'
      y: 'y'
      groupby: 'band'
      rasterize: True
      width: 400
      dynamic: False
In [17]:
landsat_5.hvplot.band_image()
Out[17]: [interactive band_image plot: a rasterized image for each Landsat band, with a band-selection widget]

Accessing the data

So far we have been looking at the Intake catalog entry landsat_5. To access the data on this object, we need to read it. If the data are big, we can use the .to_dask() method to create a Dask-backed xarray.DataArray; if the data are small, we can use the .read() method to read everything straight into a regular xarray.DataArray. Once in an xarray object, the data can be more easily manipulated and visualized.

In [18]:
type(landsat_5)
Out[18]:
intake.catalog.local.LocalCatalogEntry

Xarray DataArray

To get an xarray object, we'll use the .read() method.

In [19]:
landsat_5_xda = landsat_5.read()
type(landsat_5_xda)
Out[19]:
xarray.core.dataarray.DataArray

We can use tab completion to explore the other attributes and methods available on our xarray.DataArray object.

In [20]:
# Exercise: Try typing landsat_5_xda. and press [tab] - don't forget the trailing dot!

NumPy Array

Machine learning libraries such as scikit-learn accept NumPy arrays as input. For an xarray.DataArray, the underlying array is accessible via the .values attribute.

In [21]:
landsat_5_npa = landsat_5_xda.values
type(landsat_5_npa)
Out[21]:
numpy.ndarray
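
As a brief preview of the next tutorial, here is a hedged sketch of reshaping this (band, y, x) array into the (samples, features) layout that scikit-learn estimators expect:

import numpy as np

n_bands = landsat_5_npa.shape[0]
X = landsat_5_npa.reshape(n_bands, -1).T  # (n_pixels, n_bands) feature matrix
X = np.nan_to_num(X)                      # scikit-learn estimators reject NaNs
print(X.shape)                            # (90000, 6) for this 300x300, 6-band image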

Next:

Now that you have loaded your data, you will typically need to reshape it appropriately before it can be fed into a machine-learning pipeline. These steps are detailed in the next tutorial: Alignment and Preprocessing.