In this tutorial we will finish downscaling wind fields with DEVINE.

The first part was dedicated to making predictions with the DEVINE convolutional neural network. Now we will put the predictions back on maps.

We will implement the following text from my PhD:

Independently from this operation, NWP large scale horizontal wind components are linearly interpolated onto the target high resolution topographic map. Finally, for each large scale interpolated direction, a lookup function looks at the precomputed acceleration factor and angular deflection in the pre-saved maps obtained with the CNN. Then, scaling is performed as in the “patch method” and downscaled winds are obtained.
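In code, the lookup-and-scale step boils down to the following per-grid-point sketch (a minimal illustration with hypothetical array names; the actual implementation comes at the end of this tutorial):

# Per-grid-point sketch of the pre-computed method (hypothetical names)
# acceleration_maps and alpha_maps have shape (360, ny, nx): one map per rounded direction
def downscale_point(speed_nwp, dir_nwp, acceleration_maps, alpha_maps, j, i):
    idx = int(round(dir_nwp)) % 360                        # rounded large-scale direction
    speed_down = speed_nwp * acceleration_maps[idx, j, i]  # scale the interpolated speed
    dir_down = (dir_nwp - alpha_maps[idx, j, i]) % 360     # deflect the interpolated direction
    return speed_down, dir_down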
# Libraries for the pre-computed method
import rioxarray  # to manipulate geographic data (e.g. projection) with xarray
import imageio  # to create gif
from tqdm import tqdm  # to create progress bars
from scipy.interpolate import griddata  # to interpolate the NWP
# Libraries for DEVINE
import xarray as xr
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Path to module
import sys
sys.path.append("/home/letoumelinl/bias_correction/src/")
sys.path.append("//home/mrmn/letoumelinl/bias_correction/src/")
# Standard libraries
from typing import Callable, Union
import gzip
import gc
import os
import uuid
# Import from my module
from bias_correction.config.config_custom_devine import config
from bias_correction.train.model import CustomModel
from bias_correction.train.dataloader import CustomDataHandler
from bias_correction.utils_bc.context_manager import timer_context
from bias_correction.train.experience_manager import ExperienceManager
from bias_correction.train.wind_utils import wind2comp, comp2speed, comp2dir
Working on local


(TensorFlow startup logs: no GPU detected, running on CPU with oneDNN AVX2/FMA optimizations)
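The wind conversion helpers (wind2comp, comp2speed, comp2dir) imported above are defined in bias_correction.train.wind_utils. As a reference, here is a sketch of the conversions they perform, assuming the standard meteorological convention (the direction is where the wind comes from, measured clockwise from north):

# Sketch of the wind conversions under the meteorological convention
# (assumption: the exact implementations live in bias_correction.train.wind_utils)
def wind2comp_sketch(speed, direction_degree):
    d = np.deg2rad(direction_degree)
    return -speed * np.sin(d), -speed * np.cos(d)            # u (zonal), v (meridional)

def comp2speed_sketch(u, v):
    return np.sqrt(u**2 + v**2)                              # wind speed magnitude

def comp2dir_sketch(u, v):
    return np.mod(180 + np.rad2deg(np.arctan2(u, v)), 360)   # direction of origin [°]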

Load data

# The name of the file from the previous tutorial
result_name = "tutorial_pre_computed_method"
acceleration_small_dem_name = "acceleration_tuto_pre_computed_small_dem"
alpha_small_dem_name = "alpha_tuto_pre_computed_small_dem"

# In the folder where you store the NWP files to be downscaled, each file should include the name of the model
#
#├── arome0                            
#│   ├── arome0_2017_01
#│   ├── arome0_2017_02
#│   ├── ...
#│   ├── arome0_2017_12
#│   ├── arome0_2018_01
name_nwp = "arome0"

# Identifier of the experiment
uuid_str = str(uuid.uuid4())[:4]

# Are you working on the labia network?
labia = False

# Path to the DEM
if labia:
    large_dem_path= "/scratch/mrmn/letoumelinl/bias_correction/Data/1_Raw/DEM/domaine_grande_rousse_large.nc"
    path_small_dem = "/scratch/mrmn/letoumelinl/bias_correction/Data/1_Raw/DEM/domaine_grande_rousse_small.nc"
else:
    large_dem_path = "/home/letoumelinl/bias_correction/Data/1_Raw/DEM/domaine_grande_rousse_large.nc"
    path_small_dem = "/home/letoumelinl/bias_correction/Data/1_Raw/DEM/domaine_grande_rousse_small.nc"

# Do you want to plot intermediate results?
plot_ = True

# Save path from the file in the previous tutorial
if labia:
    path_pre_computed = f"/scratch/mrmn/letoumelinl/bias_correction/Data/1_Raw/DEM/{result_name}.nc"
    path_acceleration_small_dem = f"/scratch/mrmn/letoumelinl/bias_correction/Data/1_Raw/DEM/{acceleration_small_dem_name}.nc"
    path_alpha_small_dem = f"/scratch/mrmn/letoumelinl/bias_correction/Data/1_Raw/DEM/{alpha_small_dem_name}.nc"
    path_nwp = "/scratch/mrmn/letoumelinl/bias_correction/Data/1_Raw/DEM/AROME_pour_Ange/arome_post_processed/arome0/"
    path_downscaled_speed = f"/scratch/mrmn/letoumelinl/bias_correction/Data/1_Raw/DEM/AROME_pour_Ange/arome_post_processed/arome0/downscaled_speed_{uuid_str}.nc"
    path_downscaled_direction = f"/scratch/mrmn/letoumelinl/bias_correction/Data/1_Raw/DEM/AROME_pour_Ange/arome_post_processed/arome0/downscaled_direction_{uuid_str}.nc"
else:
    path_pre_computed = f"{result_name}.nc"
    path_acceleration_small_dem = f"{acceleration_small_dem_name}.nc"
    path_alpha_small_dem = f"{alpha_small_dem_name}.nc"
    path_nwp = "/home/letoumelinl/bias_correction/Data/1_Raw/DEM/AROME_pour_Ange/arome_post_processed/arome0/"
    path_downscaled_speed = f"/home/letoumelinl/bias_correction/Data/1_Raw/DEM/AROME_pour_Ange/arome_post_processed/arome0/downscaled_speed_{uuid_str}.nc"
    path_downscaled_direction = f"/home/letoumelinl/bias_correction/Data/1_Raw/DEM/AROME_pour_Ange/arome_post_processed/arome0/downscaled_direction_{uuid_str}.nc"

# If wind components are already in the NWP, we don't need to compute them
wind_components_already_in_nwp = False

# Names of the wind components in the NWP (if they are already there); otherwise, these names will be given to the computed components
key_u = "u"
key_v = "v"

# Name of the wind speed and direction in the output file
key_speed = "uv"
key_dir = "uv_dir"

# Name of the wind speed and direction in the NWP
key_speed_nwp = "Wind"
key_dir_nwp = "Wind_DIR"

# Name of the x and y coordinates and variables in the NWP
dim_x_nwp = "xx"
dim_y_nwp = "yy"
var_x_nwp = "x"
var_y_nwp = "y"

# Name of the x and y variables in the DEM
var_x_dem = "x"
var_y_dem = "y"

print("Take care that either wind components (u and v) or wind speed and direction (uv and uv_dir) are in the NWP variables")

# Do you want to create a gif with the results?
create_gif = False
Make sure that either wind components (u and v) or wind speed and direction (uv and uv_dir) are among the NWP variables
# EPSG codes identify the projections used (lat/lon, Lambert-93, ...)
# here the DEMs are in Lambert-93, so we use EPSG:2154
epsg_large_dem = "EPSG:2154"
epsg_small_dem = "EPSG:2154"

Load DEM

# Load the DEM as in the previous tutorial
large_dem = xr.open_dataarray(large_dem_path)

try:
    dem_array = np.expand_dims(large_dem.isel(band=0).values, axis=-1)
except AttributeError:
    dem_array = np.expand_dims(large_dem.isel(band=0).band_data.values, axis=-1)

small_dem = xr.open_dataarray(path_small_dem)
# Specify the projection of the dem
large_dem = large_dem.rio.write_crs(epsg_large_dem)
small_dem = small_dem.rio.write_crs(epsg_small_dem)
dem_array.shape
(3590, 4047, 1)
# as in the previous tutorial
config["custom_input_shape"] = list(dem_array.shape)
# Plotting utility class that we will use later
import matplotlib as mpl
class MidpointNormalize(mpl.colors.Normalize):
    def __init__(self, vmin, vmax, midpoint=0, clip=False):
        self.midpoint = midpoint
        mpl.colors.Normalize.__init__(self, vmin, vmax, clip)

    def __call__(self, value, clip=None):
        normalized_min = max(0, 1 / 2 * (1 - abs((self.midpoint - self.vmin) / (self.midpoint - self.vmax))))
        normalized_max = min(1, 1 / 2 * (1 + abs((self.vmax - self.midpoint) / (self.midpoint - self.vmin))))
        normalized_mid = 0.5
        x, y = [self.vmin, self.midpoint, self.vmax], [normalized_min, normalized_mid, normalized_max]
        return np.ma.masked_array(np.interp(value, x, y))

norm = MidpointNormalize(vmin=0, vmax=5, midpoint=1)
def activation_arctan(scaled_wind: Union[np.ndarray, float], alpha: float = 38.2):
    return alpha*np.arctan(scaled_wind/alpha)
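This activation leaves low and moderate speeds almost unchanged while bounding extreme values: the output saturates at alpha·π/2 ≈ 60 m/s for alpha = 38.2. A quick check (approximate values):

print(activation_arctan(5.0))    # ≈ 4.97: nearly the identity for moderate speeds
print(activation_arctan(100.0))  # ≈ 46: extreme accelerations are damped
print(38.2 * np.pi / 2)          # ≈ 60.0: upper bound of the activation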

Put predictions back on large_dem

There is a cropping operation in DEVINE after the first rotations. Here, we compute the cropping parameters in order to put the predictions back at their initial location.

We select the dimensions of the DEM

length_y = config["custom_input_shape"][0]
length_x = config["custom_input_shape"][1]
print("length_x, length_y")
print(length_x, length_y)
length_x, length_y
4047 3590

Then we compute the cropping area. This area is computed within DEVINE; we compute it again here to put the predictions back on the original DEM.

min_length is the smallest edge of the input DEM

# Here we mimic the behavior of the CropTopography Layer
min_length = np.min([length_y, length_x])
print("\nmin_length")
print(min_length)
min_length
3590

y_diff is half the side length of the largest centered square for which we are sure that rotation does not create NaNs.

y_diff = np.intp((min_length/np.sqrt(2))) // 2
x_diff = y_diff
print("\ny_diff")
print(y_diff)
y_diff
1269

Then we select the center of the DEM and extract the largest square in which we are sure no NaNs are created.

y_offset_left = length_y//2 - y_diff
y_offset_right = length_y//2 + y_diff + 1
print("y_offset_left, y_offset_right")
print(y_offset_left, y_offset_right)

x_offset_left = length_x//2 - x_diff
x_offset_right = length_x//2 + x_diff + 1
print("x_offset_left, x_offset_right")
print(x_offset_left, x_offset_right)
y_offset_left, y_offset_right
526 3065
x_offset_left, x_offset_right
754 3293
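As a quick sanity check (not in the original notebook), the offsets define a 2539 × 2539 window, i.e. 2·y_diff + 1 pixels per side, matching the shape of the predictions saved in the previous tutorial:

print(y_offset_right - y_offset_left, x_offset_right - x_offset_left)  # 2539 2539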

Acceleration

We will put the accelerations computed on the cropped domain back onto the initial DEM.

This is done by creating an “angles” dimension in the large_dem dataset.

%%time

# Open the file
results = xr.open_dataset(path_pre_computed)

# Create the "angles" dimension
large_dem = large_dem.expand_dims({"angles": 360})
CPU times: user 1.54 ms, sys: 14.9 ms, total: 16.4 ms
Wall time: 15.6 ms

Xarray creates a “band” coordinate, which is not required in our case. We remove it by selecting the band where the data is located.

%%time

# Select band 0
large_dem = large_dem.isel(band=0)
CPU times: user 137 µs, sys: 69 µs, total: 206 µs
Wall time: 211 µs

Once the new dimension is created, we convert the DataArray to a Dataset and add the acceleration as a new variable.

%%time

# Convert to dataset because we were working with a DataArray
large_dem = large_dem.to_dataset(name="alti")

# Initialize the acceleration variable
large_dem["acceleration"] = (("angles", "y", "x"), np.zeros((360, length_y, length_x), dtype=np.float32))

# Store the results
large_dem["acceleration"].values[:, y_offset_left:y_offset_right, x_offset_left:x_offset_right] = np.float32(results.acceleration.values)
CPU times: user 28.7 s, sys: 4.39 s, total: 33.1 s
Wall time: 33.2 s
Free memory

We free RAM by calling the garbage collector explicitly. This is a trick for notebooks: when intermediate results are properly wrapped in functions, it should not be required. Future work will embed the steps presented in this tutorial into functions to circumvent the need for manual garbage collection.

del dem_array
gc.collect()
81
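For illustration, here is a minimal sketch (with a hypothetical function name) of how wrapping a step in a function makes explicit garbage collection unnecessary:

# Hypothetical wrapper: objects created inside the function go out of scope when it
# returns, so Python can reclaim their memory without explicit del/gc.collect() calls.
def load_acceleration_array(path_pre_computed):
    results = xr.open_dataset(path_pre_computed)   # temporary, local to the function
    acceleration = np.float32(results.acceleration.values)
    results.close()
    return acceleration                            # only the array survives the call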

Clip the large DEM to the small DEM domain

The large DEM domain contains more data than necessary. To reduce computation time and memory consumption, we reduce the large_dem to a size similar to the small DEM, keeping a small margin that matters for the interpolation at the borders of the small DEM.

%%time

large_dem = large_dem.rio.clip_box(
    minx=small_dem.x.min()-100, 
    miny=small_dem.y.min()-100, 
    maxx=small_dem.x.max()+100,
    maxy=small_dem.y.max()+100)
CPU times: user 258 ms, sys: 1.16 s, total: 1.42 s
Wall time: 1.46 s

According to rioxarray, “reproject_match” does the following operation:

Reproject a DataArray object to match the resolution, projection, and region of another DataArray.
%%time

# resampling=1 corresponds to bilinear resampling in rasterio (nearest would be 0)
large_dem = large_dem.rio.reproject_match(small_dem, resampling=1)
CPU times: user 35.9 s, sys: 1.94 s, total: 37.9 s
Wall time: 38 s
if plot_:
    direction = 270
    plt.figure(figsize=(15, 5))
    plt.subplot(121)
    ax = plt.gca()
    small_dem.plot()
    plt.xlabel("")
    plt.ylabel("")
    ax.set_xticks([])
    ax.set_yticks([])
    cb = ax.collections[-1].colorbar
    cb.set_label('Elevation [m]', fontsize=15)
    plt.title("Digital Elevation Model")

    plt.subplot(122)
    large_dem.isel(angles=direction).acceleration.plot(norm=norm, 
                                                       cmap="seismic")
    ax = plt.gca()
    plt.xlabel("")
    plt.ylabel("")
    ax.set_xticks([])
    ax.set_yticks([])
    cb = ax.collections[-1].colorbar
    u,v = wind2comp(1_000, direction, "degree")
    ax.quiver(955000, 6450000, u, v, scale=1/0.0002, color="C1", width=0.02)

    cb.set_label('DEVINE acceleration factor []', fontsize=15)
    plt.title(f"Large scale wind direction: {direction}°")
    plt.show()

[Figure: Digital Elevation Model (left) and DEVINE acceleration factor for a large-scale wind direction of 270° (right)]

if create_gif:
    for direction in range(360):
        plt.figure(figsize=(15, 5))
        plt.subplot(121)
        ax = plt.gca()
        small_dem.plot()
        plt.xlabel("")
        plt.ylabel("")
        ax.set_xticks([])
        ax.set_yticks([])
        cb = ax.collections[-1].colorbar
        cb.set_label('Elevation [m]', fontsize=15)
        plt.title("Digital Elevation Model")

        plt.subplot(122)
        large_dem.isel(angles=direction).acceleration.plot(norm=norm, 
                                                           cmap="seismic")
        ax = plt.gca()
        plt.xlabel("")
        plt.ylabel("")
        ax.set_xticks([])
        ax.set_yticks([])
        cb = ax.collections[-1].colorbar
        u,v = wind2comp(1_000, direction, "degree")
        ax.quiver(955000, 6450000, u, v, scale=1/0.0002, color="C1", width=0.02)

        cb.set_label('DEVINE acceleration factor []', fontsize=15)
        plt.title(f"Large scale wind direction: {direction}°")
        
        plt.savefig(f'grande_rousses_gif/img_{direction}.png', 
                    transparent = False,  
                    facecolor = 'white')
    frames = []
    for direction in range(360):
        frames.append(imageio.v2.imread(f'grande_rousses_gif/img_{direction}.png'))

    imageio.mimsave('grande_rousses_gif/example.gif', 
                    frames,
                    fps = 360/10)

Save intermediate result (accelerations)

%time
large_dem.to_netcdf(path_acceleration_small_dem)
CPU times: user 3 µs, sys: 0 ns, total: 3 µs
Wall time: 5.96 µs
del large_dem
gc.collect()
109

Alpha

We will do the same with the angular deviations computed by DEVINE.

Why do we compute accelerations and angular deviations in two distinct steps? Because doing both at the same time would not fit in the memory of my laptop.

%time
results = xr.open_dataset(path_pre_computed)
CPU times: user 3 µs, sys: 1e+03 ns, total: 4 µs
Wall time: 6.91 µs

We do the same operations as we did for acceleration.

%time
large_dem = xr.open_dataarray(large_dem_path)
large_dem = large_dem.rio.write_crs(epsg_large_dem)
large_dem = large_dem.expand_dims({"angles": 360})
large_dem = large_dem.isel(band=0)
large_dem = large_dem.to_dataset(name="alti")
large_dem["alpha"] = (("angles", "y", "x"), np.zeros((360, length_y, length_x), dtype=np.float32))
CPU times: user 3 µs, sys: 1e+03 ns, total: 4 µs
Wall time: 6.68 µs
%time
large_dem["alpha"].values[:, y_offset_left:y_offset_right, x_offset_left:x_offset_right] = np.float32(results.alpha.values)
CPU times: user 3 µs, sys: 1e+03 ns, total: 4 µs
Wall time: 6.91 µs
%time
large_dem = large_dem["alpha"].to_dataset()
CPU times: user 3 µs, sys: 0 ns, total: 3 µs
Wall time: 6.2 µs
%time
del results
gc.collect()
CPU times: user 3 µs, sys: 1 µs, total: 4 µs
Wall time: 6.2 µs
66
%time
large_dem = large_dem.rio.clip_box(
    minx=small_dem.x.min()-100, 
    miny=small_dem.y.min()-100, 
    maxx=small_dem.x.max()+100,
    maxy=small_dem.y.max()+100)
gc.collect()
large_dem = large_dem.rio.reproject_match(small_dem, resampling=1)
large_dem["alpha"] = np.rad2deg(large_dem["alpha"], dtype=np.float32)
large_dem.to_netcdf(path_alpha_small_dem)
CPU times: user 5 µs, sys: 0 ns, total: 5 µs
Wall time: 10.3 µs

Load acceleration and alpha on small_dem

acceleration = xr.open_dataset(path_acceleration_small_dem)
alpha = xr.open_dataset(path_alpha_small_dem)
plot_ = True
if plot_:
    direction = 270
    plt.figure(figsize=(20, 5))
    plt.subplot(131)
    ax = plt.gca()
    small_dem.plot()
    plt.xlabel("")
    plt.ylabel("")
    ax.set_xticks([])
    ax.set_yticks([])
    cb = ax.collections[-1].colorbar
    cb.set_label('Elevation [m]', fontsize=15)
    plt.title("Digital Elevation Model")

    
    
    plt.subplot(132)
    norm = MidpointNormalize(vmin=-90, vmax=90, midpoint=0)
    alpha.isel(angles=direction).alpha.plot(norm=norm, cmap="seismic")
    ax = plt.gca()
    plt.xlabel("")
    plt.ylabel("")
    ax.set_xticks([])
    ax.set_yticks([])
    cb = ax.collections[-1].colorbar
    u,v = wind2comp(1_000, direction, "degree")
    ax.quiver(955000, 6450000, u, v, scale=1/0.0002, color="C1", width=0.02)
    cb.set_label('DEVINE deviation [°]', fontsize=15)
    plt.title(f"Large scale wind direction: {direction}°")
    
    
    
    plt.subplot(133)
    norm = MidpointNormalize(vmin=0, vmax=5, midpoint=1)
    acceleration.isel(angles=direction).acceleration.plot(norm=norm, cmap="seismic")
    ax = plt.gca()
    plt.xlabel("")
    plt.ylabel("")
    ax.set_xticks([])
    ax.set_yticks([])
    cb = ax.collections[-1].colorbar
    u,v = wind2comp(1_000, direction, "degree")
    ax.quiver(955000, 6450000, u, v, scale=1/0.0002, color="C1", width=0.02)
    
    cb.set_label('DEVINE acceleration factor []', fontsize=15)
    plt.title(f"Large scale wind direction: {direction}°")

[Figure: Digital Elevation Model (left), DEVINE deviation (center) and DEVINE acceleration factor (right) for a large-scale wind direction of 270°]

plot_ = True
norm = MidpointNormalize(vmin=-90, vmax=90, midpoint=0)
if plot_:
    direction = 270
    plt.figure(figsize=(15, 5))
    plt.subplot(121)
    ax = plt.gca()
    small_dem.plot()
    plt.xlabel("")
    plt.ylabel("")
    ax.set_xticks([])
    ax.set_yticks([])
    cb = ax.collections[-1].colorbar
    cb.set_label('Elevation [m]', fontsize=15)
    plt.title("Digital Elevation Model")

    plt.subplot(122)
    large_dem.isel(angles=direction).alpha.plot(norm=norm, 
                                                       cmap="seismic")
    ax = plt.gca()
    plt.xlabel("")
    plt.ylabel("")
    ax.set_xticks([])
    ax.set_yticks([])
    cb = ax.collections[-1].colorbar
    u,v = wind2comp(1_000, direction, "degree")
    ax.quiver(955000, 6450000, u, v, scale=1/0.0002, color="C1", width=0.02)

    cb.set_label('DEVINE deviation [°]', fontsize=15)
    plt.title(f"Large scale wind direction: {direction}°")

[Figure: Digital Elevation Model (left) and DEVINE deviation for a large-scale wind direction of 270° (right)]

del large_dem
gc.collect()
24167

Interpolate and downscale all files

At this point, we have a collection of accelerations and angular deviations on the small_dem, as simulated by DEVINE for each possible large-scale direction.

The next step consists in looking at the actual direction simulated by AROME and retrieving, for that specific direction, the acceleration and angular deviation simulated by DEVINE.

Once the acceleration factor and angular deviation term are found, we apply them to the speed and direction simulated by AROME at the point of interest.

This is done through the following steps:

1. Load the pre-computed accelerations and angular deviations on the small_dem.

2. Iterate over each NWP file that we want to downscale.

3. Compute the wind components in the NWP file from the wind speed and direction.

4. Select the minimal amount of data by keeping only the required variables and cropping the AROME simulation to the domain of interest.

5. Interpolate AROME onto the small_dem grid using linear interpolation. You can choose another interpolation method, but not nearest neighbor, because it creates chessboard-like patterns.

6. Compute wind speed and direction from the interpolated wind components.

7. Round the direction values (we only precomputed integer large-scale directions: for example, rotations with angles of 21° and 22°, but not 21.5°).

8. Subtract the alpha values from the NWP direction (if you look at how alpha is computed, you will see that we need to subtract it from the wind direction, not add it, but this is a technical detail) and multiply the NWP wind speeds by the acceleration ratios.

9. Save the file.

To keep RAM usage low, I do the speed and direction computations separately. Otherwise, I would have to load both files at the same time, which crashes on my computer. On a larger machine, you could merge these steps and obtain a faster implementation.
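For reference, here is a minimal vectorized sketch of the lookup (steps 7 and 8) on plain NumPy arrays; acc_np and alpha_np are hypothetical names for the pre-computed maps loaded as arrays of shape (360, ny, nx). I use the loop-based implementation below instead, because fully vectorized indexing can exceed the RAM of a laptop.

# Hypothetical vectorized lookup (sketch): speed and direction have shape (nt, ny, nx)
def lookup_devine(speed, direction, acc_np, alpha_np):
    idx = np.mod(np.round(direction).astype(int), 360)   # rounded direction as lookup key
    iy = np.arange(speed.shape[1])[None, :, None]        # broadcastable row indices
    ix = np.arange(speed.shape[2])[None, None, :]        # broadcastable column indices
    acc = acc_np[idx, iy, ix]                            # per-pixel acceleration factor
    dev = alpha_np[idx, iy, ix]                          # per-pixel angular deviation [°]
    return speed * acc, np.mod(direction - dev, 360.0)   # subtract alpha, as in step 8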

# Load acceleration and angular deviations
acceleration = xr.open_dataset(path_acceleration_small_dem)
alpha = xr.open_dataset(path_alpha_small_dem)

# We iterate the NWP files to downscale
for file in tqdm(os.listdir(path_nwp), position=0, leave=True, colour='green', ncols=80):
    
    print("\nfile: ")
    print(file)
    
    # We downscale file after file and save the results for each file.
    # To do so, we derive the output names from the original file name.
    name_devine_direction = file.replace(name_nwp, "devine_direction")
    name_devine_speed = file.replace(name_nwp, "devine_speed")
    
    # Check that the file has not already been downscaled (for example, if you launch this script several times on the same day)
    downscaled_direction_computed = name_devine_direction in os.listdir(path_nwp)
    downscaled_speed_computed = name_devine_speed in os.listdir(path_nwp)
    
    # If the file is already downscaled, we go to the next file
    if downscaled_direction_computed and downscaled_speed_computed:
        print(f"{file} already computed")
        continue
    else:
        print("Start computing")
    
    # Load small DEM
    small_dem = xr.open_dataarray(path_small_dem)

    # Load nwp file
    nwp = xr.open_dataset(path_nwp+file)
    
    # Rename the dataset
    small_dem = small_dem.to_dataset(name="alti")
    small_dem = small_dem.expand_dims({"time": len(nwp.time)})
    
    # Compute u and v in order to interpolate
    if not wind_components_already_in_nwp:
        nwp[key_u], nwp[key_v] = wind2comp(nwp[key_speed_nwp], nwp[key_dir_nwp], "degree")
    
    # Convert to float32 to save memory
    nwp[key_u] = nwp[key_u].astype(np.float32)
    nwp[key_v] = nwp[key_v].astype(np.float32)
    
    # We keep only the minimum number of variables
    nwp = nwp[[var_x_nwp, var_y_nwp, key_u, key_v]]
    
    # Limit the nwp grid to a small domain
    margin = 5500  # the margin is taken very large; a smaller one would also work
    x_min = small_dem[var_x_dem].min() - margin
    x_max = small_dem[var_x_dem].max() + margin
    y_min = small_dem[var_y_dem].min() - margin
    y_max = small_dem[var_y_dem].max() + margin
    
    # Crop nwp to release memory
    nwp = nwp.where((nwp[var_x_nwp] > x_min) & 
                (nwp[var_x_nwp] < x_max) & 
                (nwp[var_y_nwp] > y_min) & 
                (nwp[var_y_nwp] < y_max) 
               ).dropna(dim=dim_x_nwp, how="all").dropna(dim=dim_y_nwp, how="all")
    
    # Initialize interpolation
    size_t = small_dem.time.shape[0]
    size_y = small_dem[var_y_dem].shape[0]
    size_x = small_dem[var_x_dem].shape[0]

    u_interpolated = np.zeros((size_t, size_y, size_x), dtype=np.float32)
    v_interpolated = np.zeros((size_t, size_y, size_x), dtype=np.float32)
    
    # Create the regular grid using coordinates from small_dem
    x, y = np.meshgrid(small_dem[var_x_dem].values, small_dem[var_y_dem].values)
    
    # Interpolate nwp with a method for irregular grid
    with timer_context("Interpolating NWP"):
        for time in tqdm(range(size_t), position=0, leave=False, colour="cyan", ncols=80):
            x_nwp = nwp.isel(time=time)[var_x_nwp].values.flatten()
            y_nwp = nwp.isel(time=time)[var_y_nwp].values.flatten()
            u_nwp = nwp[key_u].isel(time=time).values.flatten()
            v_nwp = nwp[key_v].isel(time=time).values.flatten()
            u_interpolated[time, :, :] = np.float32(griddata((x_nwp, y_nwp), u_nwp, (x, y), method='linear'))
            v_interpolated[time, :, :] = np.float32(griddata((x_nwp, y_nwp), v_nwp, (x, y), method='linear'))
    
    # Garbage collector to release memory
    del nwp
    gc.collect()
    
    # Replace interpolated data in the dem grid
    small_dem[key_u] = (("time", var_y_dem, var_x_dem), u_interpolated)
    small_dem[key_v] = (("time", var_y_dem, var_x_dem), v_interpolated)
    
    # Garbage collector to release memory
    del u_interpolated
    del v_interpolated
    gc.collect()

    # Drop alti to save memory
    small_dem = small_dem.drop("alti")
    
    # Compute comp2speed and comp2dir
    with timer_context("comp2speed and comp2dir"):
        small_dem[key_speed] = comp2speed(small_dem[key_u], small_dem[key_v])
        small_dem[key_dir] = comp2dir(small_dem[key_u], small_dem[key_v])
    
    # Drop u and v to save memory
    small_dem = small_dem.drop([key_u, key_v])
    
    # Round NWP directions (otherwise we cannot look up accelerations, which are indexed by rounded directions).
    small_dem[key_dir] = np.round(small_dem[key_dir])
    
    # Create copies of the NWP speed and directions that will be modified with DEVINE accelerations
    # and angular deviations. It is important to make deep copies.
    downscaled_uv = small_dem[key_speed].copy(deep=True)
    downscaled_direction = small_dem[key_dir].copy(deep=True)
    
    # Replace directions. This can take very long (2 hours for 2 months at an hourly time step)
    # when computed on a very fine small DEM (30 m).
    with timer_context("Replace devine directions"):
        for direction in tqdm(range(360), position=0, leave=False, colour="magenta", ncols=80):
            filter_direction = small_dem[key_dir] == direction
            downscaled_direction = xr.where(filter_direction,
                                            np.mod(small_dem[key_dir] - alpha.isel(angles=direction).alpha, 360),
                                            downscaled_direction)
    # Save devine directions        
    downscaled_direction.to_netcdf(path_downscaled_direction)
    print(f"Saved file to disk {path_downscaled_direction}")
    
    # Replace speed
    # Note: this could be done simultaneously with the direction replacement,
    # but that leads to memory overconsumption on my computer
    with timer_context("Replace devine speed"):
        for direction in tqdm(range(360), position=0, desc="j", colour='red', ncols=80):
            filter_direction = small_dem[key_dir] == direction
            downscaled_uv = xr.where(filter_direction,
                                     small_dem[key_speed]*acceleration.isel(angles=direction).acceleration,
                                     downscaled_uv)
    
    # Arctan function, cf publication "Emulating..."
    downscaled_uv = activation_arctan(downscaled_uv, alpha=38.2)
    print("arctan worled")
    
    # Save devine speeds
    downscaled_uv.to_netcdf(path_downscaled_speed)
    print(f"Saved file to disk {path_downscaled_speed}")
    
    # Delete variables before the next iteration to release memory
    del small_dem
    del downscaled_direction
    del downscaled_uv
    gc.collect()
  0%|                                                    | 0/33 [00:00<?, ?it/s]


file: 
arome0_2017_8_1_6___2017_9_11_22.nc
Start computing
<xarray.Dataset>
Dimensions:  (yy: 225, xx: 175, time: 1001)
Coordinates:
  * time     (time) datetime64[ns] 2017-08-01T06:00:00 ... 2017-09-11T22:00:00
Dimensions without coordinates: yy, xx
Data variables:
    x        (yy, xx) float64 ...
    y        (yy, xx) float64 ...
    u        (time, yy, xx) float32 -4.577 -5.493 -6.159 ... 2.468 2.062 0.09952
    v        (time, yy, xx) float32 1.78 1.273 0.7875 ... 0.6974 1.25 0.7553
Attributes:
    made_with:                 epygram-1.3.6
    nco_openmp_thread_number:  1
    history:                   Thu May 23 09:34:48 2019: ncrcat -O FORCING_da...
    NCO:                       "4.5.5"
Begin Interpolating NWP ...



Time to calculate Interpolating NWP: 1.75 minutes
Begin comp2speed and comp2dir ...
Time to calculate comp2speed and comp2dir: 0.53 minutes
Begin Replace devine directions ...


 28%|███████████▌                             | 101/360 [20:40<53:09, 12.32s/it]