Machine Learning¶
With the data preparation complete, this step demonstrates how to configure a scikit-learn or dask_ml pipeline; any library, algorithm, or simulator could be used at this stage, as long as it accepts array data. In the next step of the tutorial, Data Visualization, you will learn how to visualize the output of this pipeline and check that the inputs to the pipeline have the expected structure.
import intake
import numpy as np
import xarray as xr
import holoviews as hv
import cartopy.crs as ccrs
import geoviews as gv
import hvplot.xarray
hv.extension('bokeh', width=80)
Recap: Loading data¶
Note: in this tutorial we will use the small version of the Landsat data to keep run times short. If you prefer to work with the full-scale data, use cat.landsat_5.read_chunked() instead.
cat = intake.open_catalog('../catalog.yml')
landsat_5_da = cat.landsat_5_small.read_chunked()
landsat_5_da.shape
Reshaping Data¶
We'll need to reshape the image into the shape that dask-ml / scikit-learn expect: (n_samples, n_features), where n_features is the number of bands and n_samples is the total number of pixels in each band. Essentially, we'll be creating a bag of pixels out of each image, where each pixel has multiple features (bands) but the ordering of the pixels is no longer relevant. In this case we start with an array that is n_bands by n_y by n_x (6, 300, 300), and we need to reshape it to an array that is (n_samples, n_features), i.e. (90000, 6). We'll first look at using NumPy, then Xarray.
Numpy¶
Data can be reshaped at the lowest level using NumPy, by getting the underlying values from the xarray.DataArray and using flatten and transpose to get the right shape.
arr = landsat_5_da.values
arr.shape
Since we want to flatten along the x and y but not along the band axis, we need to iterate over each band and flatten the data.
flattened_npa = np.array([arr[i].flatten() for i in range(arr.shape[0])])
flattened_npa
flattened_npa.shape
To get our flattened array into the shape (n_samples, n_features), we'll reorder the dimensions using .transpose:
flattened_t_npa = flattened_npa.transpose()
flattened_t_npa.shape
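As an aside, the two steps above can be collapsed into a single call, since flatten uses the same row-major ordering as reshape. This is a minimal equivalent sketch (flattened_t_npa_alt is just an illustrative name):
flattened_t_npa_alt = arr.reshape(arr.shape[0], -1).T  # (n_bands, n_pixels) -> (n_pixels, n_bands)
np.array_equal(flattened_t_npa_alt, flattened_t_npa)   # expect True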
Since numpy.arrays are not labeled data, the semantics of the data are lost over the course of these operations, as the necessary metadata does not exist at the NumPy level.
Xarray¶
By using xarray methods to flatten the data, we can keep track of the coordinate labels ('x' and 'y') along the way. This means that we have the ability to reshape back to our original array at any time with no information loss.
flattened_xda = landsat_5_da.stack(z=('x','y'))
flattened_xda
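As a quick sanity check, unstacking should reproduce the original array. A hedged sketch (note that the comparison triggers a computation on the dask-backed data, and that unstacking may return the dimensions in a different order, hence the transpose):
roundtrip = flattened_xda.unstack('z').transpose(*landsat_5_da.dims)
roundtrip.equals(landsat_5_da)  # expect True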
We can reorder the dimensions using DataArray.transpose:
flattened_t_xda = flattened_xda.transpose('z', 'band')
flattened_t_xda.shape
Rescaling Data¶
Rescale (standardize) the data before feeding it to the algorithm, since the ML pipeline we have selected expects input values to be small.
Here we'll demonstrate doing this in numpy and in xarray.
(flattened_t_npa - flattened_t_npa.mean()) / flattened_t_npa.std()
rescaled = (flattened_t_xda - flattened_t_xda.mean()) / flattened_t_xda.std()
rescaled.compute()
NOTE: Since the xarray object is backed by dask, the actual computation isn't performed until .compute() is called.
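For instance, even after calling .compute() above (which returned a new, computed object), rescaled itself still wraps a lazy dask array:
type(rescaled.data)  # dask.array.core.Array -- still lazy until computed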
# Exercise: Inspect the numpy array at rescaled.values to check that it matches the numpy array above. You could use == for this with .all
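One possible approach is sketched below, under the assumption that the DataArray dims are ('band', 'y', 'x'). Note that stack(z=('x', 'y')) orders pixels with y varying fastest, while the NumPy flatten above used row-major (y, x) order, so we build a matching stack before comparing; np.allclose guards against floating-point round-off between dask and NumPy reductions:
rescaled_npa = (flattened_t_npa - flattened_t_npa.mean()) / flattened_t_npa.std()
check = landsat_5_da.stack(z=('y', 'x')).transpose('z', 'band')  # matches the NumPy pixel order
check_rescaled = (check - check.mean()) / check.std()
np.allclose(check_rescaled.values, rescaled_npa)  # expect True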
Side-note: Other preprocessing¶
Although it isn't needed in this case, sometimes you have to add or remove axes to get the data into the right shape. Here is an example of adding an axis with numpy and with xarray.
np.expand_dims(flattened_t_npa, axis=2).shape
flattened_t_xda.expand_dims(dim='e', axis=2).shape
# Exercise: Try removing the extra axis using np.squeeze or .squeeze on the xarray object
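A minimal sketch of one way to do this:
np.squeeze(np.expand_dims(flattened_t_npa, axis=2), axis=2).shape   # back to (90000, 6)
flattened_t_xda.expand_dims(dim='e', axis=2).squeeze('e').shape     # back to (90000, 6)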
ML pipeline¶
The Machine Learning pipeline shown below is just for the purpose of understanding the shaping/reshaping of the data. In practice you will likely use a more sophisticated pipeline. Here we will use a version of SpectralClustering from dask_ml, a scalable equivalent of the scikit-learn implementation, which clusters pixels based on their similarity across all bands (which makes it spectral clustering by spectra!).
from dask_ml.cluster import SpectralClustering
from dask.distributed import Client
client = Client(processes=False)
client
Now we will compute and persist the rescaled data to feed into the ML pipeline. Notice that X has the shape (n_samples, n_features), as discussed above.
X = client.persist(rescaled)
X.shape
First we will set up the model with the number of clusters and other options.
clf = SpectralClustering(n_clusters=4, random_state=0, gamma=None,
kmeans_params={'init_max_iter': 5},
persist_embedding=True)
Next we'll fit the model to our data X. This is the slow part: it will take a noticeable amount of time, something like 1 minute for the data in this tutorial or 9 minutes for a full-size Landsat image.
%time clf.fit(X)
# Exercise: Open the dask status dashboard and watch the workers in progress.
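If the dashboard link isn't handy, the client object can provide it:
print(client.dashboard_link)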
labels = clf.assign_labels_.labels_.compute()
labels.shape
Un-flattening¶
Once the computation is done, the output can be used to create a new array with the same structure as the input array. This new output array will have the coordinates needed to be unstacked similarly to how they were stacked. One of the main benefits of using xarray for this stacking and unstacking is that it allows xarray to keep track of the coordinate information for us.
Since the original array is n_samples by n_features (90_000, 6) and the output contains just one value per sample (90_000,), the template structure for this data needs to have the shape (n_samples,). We achieve this by just taking one of the bands.
template = flattened_t_xda[:, 0]
output_array = template.copy(data=labels)
output_array
With this new output array in hand, we can unstack back to the original dimensions:
unstacked = output_array.unstack()
unstacked
landsat_5_da.sel(band=4).hvplot(x='x', y='y', width=400, height=400, datashade=True, cmap='greys').relabel('Image') + \
unstacked.hvplot(x='x', y='y', width=400, height=400, cmap='Category10', colorbar=False).relabel('Clustered')
Geographic plot¶
The plot above is useful and quick to generate, but it isn't referenced against the underlying geographic coordinates, which is crucial if we want to overlay the data on any other geographic data sources. Adding the coordinate reference system in the hvplot method ensures that the data is properly positioned in space. This geo-referencing is made very straightforward by the way xarray persists metadata. We can even add tiles underneath.
gv.tile_sources.EsriImagery * unstacked.hvplot(x='x', y='y', geo=True, height=500, cmap='Category10', alpha=0.7)
# Exercise: Try adding a different set of map tiles. Use tab completion to find others.
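For example, OpenStreetMap tiles are one of the options available in gv.tile_sources:
gv.tile_sources.OSM * unstacked.hvplot(x='x', y='y', geo=True, height=500, cmap='Category10', alpha=0.7)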
Next:¶
Now that your analysis is complete, you are ready for the next step, Data Visualization, where you will learn how to visualize the output of this pipeline and check that the inputs to the pipeline have the expected structure.