Exported Functions

Data Preparation

When raw data is available in matrix format, prepare_data allows for easy conversion and cropping to the format expected by the models. It also accepts inputs compatible with the Tables.jl interface, for example a DataFrame or a CSV.File.

FluxArchitectures.prepare_data - Function
prepare_data(data, poollength, datalength, horizon)
prepare_data(data, poollength, datalength, horizon; normalise=true)

Cast 2D time series data into the format used by FluxArchitectures. data is a matrix or a Tables.jl-compatible data source containing data in the form timesteps x features (i.e. each column contains the time series for one feature). poollength defines the number of timesteps to pool when preparing a single frame of data to be fed to the model. datalength determines the number of time steps included in the output, and horizon determines the number of time steps that should be forecasted by the model. The label data is assumed to be contained in the first column. Outputs features and labels.

Note that when horizon is smaller than or equal to poollength, the model has direct access to the value it is supposed to predict.

source
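
A minimal usage sketch with random data standing in for a real time series (the sizes are illustrative, assuming the output shapes described above):

using FluxArchitectures

# 1200 timesteps of 31 features; column 1 holds the series to be forecasted.
rawdata = randn(Float32, 1200, 31)
poollength, datalength, horizon = 6, 1000, 6

features, labels = prepare_data(rawdata, poollength, datalength, horizon)
size(features)  # (31, 6, 1, 1000), i.e. features x poollength x 1 x datalength
size(labels)    # (1000,), one label per frame, horizon steps ahead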

For loading some example data, the following function can be used.

FluxArchitectures.get_data - Function
get_data(dataset, poollength, datalength, horizon)

Return features and labels from one of the sample datasets in the repository. dataset can be one of :solar, :traffic, :exchange_rate or :electricity. poollength gives the number of timesteps to pool for the model, datalength determines the number of time steps included in the output, and horizon determines the number of time steps that should be forecasted by the model.

See also: prepare_data, load_data

source

The datasets are automatically downloaded when needed. See Datasets for a description.
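
For instance, the :exchange_rate dataset can be downloaded, windowed and split into features and labels in a single call (a sketch; the chosen window sizes are illustrative):

using FluxArchitectures

poollength, datalength, horizon = 10, 1000, 6
input, target = get_data(:exchange_rate, poollength, datalength, horizon)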

Models

The following models are exported:

FluxArchitectures.DARNN - Function
DARNN(inp, encodersize, decodersize, poollength, orig_idx)

Create a DA-RNN layer based on the architecture described in Qin et al., as implemented for PyTorch here. inp specifies the number of input features. encodersize defines the number of LSTM encoder layers, and decodersize defines the number of LSTM decoder layers. poollength gives the length of the window for the pooled input data, and orig_idx defines the array index where the original time series is stored in the input data.

Data is expected as an array with dimensions features x poollength x 1 x data, i.e. for 1000 data points containing 31 features that have been windowed over 6 timesteps, DARNN expects an input of size (31, 6, 1, 1000).

Takes the keyword arguments init and bias for the initialization of the weight vector and bias of the linear layers.

source
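
A minimal construction sketch with random input data (the hyperparameters are illustrative, not tuned):

using Flux, FluxArchitectures

inp, poollength = 31, 6
model = DARNN(inp, 64, 64, poollength, 1)  # encoder/decoder size 64, target series at index 1

x = randn(Float32, inp, poollength, 1, 100)  # 100 frames of windowed data
Flux.reset!(model)  # the layer is stateful; clear the recurrent state before a fresh pass
ŷ = model(x)        # one forecast per frame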
FluxArchitectures.DSANet - Function
DSANet(inp, window, local_length, n_kernels, d_model, d_hid, n_layers, n_head, out=1, drop_prob=0.1f0, σ=Flux.relu)

Create a DSANet network based on the architecture described in Siteng Huang et al. The code follows the PyTorch implementation. inp specifies the number of input features. window gives the length of the window for the pooled input data. local_length defines the length of the convolution window for the local self-attention mechanism. n_kernels defines the number of convolution kernels for both the local and global self-attention mechanisms. d_hid defines the number of "hidden" convolution kernels in the self-attention encoder structure. n_layers gives the number of self-attention encoders used in the network, and n_head defines the number of attention heads. out gives the number of output time series, drop_prob is the dropout probability for the Dropout layers, and σ defines the network's activation function.

Data is expected as an array with dimensions features x window x 1 x data, i.e. for 1000 data points containing 31 features that have been windowed over 6 timesteps, DSANet expects an input of size (31, 6, 1, 1000).

source
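
A construction sketch with random data; the hyperparameter values below are illustrative placeholders only:

using Flux, FluxArchitectures

inp, window = 8, 10
# Positional arguments: local_length=3, n_kernels=3, d_model=4, d_hid=1, n_layers=3, n_head=2
model = DSANet(inp, window, 3, 3, 4, 1, 3, 2)

x = randn(Float32, inp, window, 1, 100)
ŷ = model(x)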
FluxArchitectures.LSTnet - Function
LSTnet(in, convlayersize, recurlayersize, poolsize, skiplength)
LSTnet(in, convlayersize, recurlayersize, poolsize, skiplength, Flux.relu)

Create an LSTnet layer based on the architecture described in Lai et al. in specifies the number of input features. convlayersize defines the number of convolutional layers, and recurlayersize defines the number of recurrent layers. poolsize gives the length of the window for the pooled input data, and skiplength defines the number of steps the hidden state of the recurrent layer is taken back in time.

Data is expected as an array with dimensions features x poolsize x 1 x data, i.e. for 1000 data points containing 31 features that have been windowed over 6 timesteps, LSTnet expects an input of size (31, 6, 1, 1000).

Takes the keyword arguments init for the initialization of the recurrent layers, and initW and bias for the initialization of the dense layer.

source
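
A short end-to-end sketch using the :exchange_rate sample data (hyperparameters are illustrative; the transpose in the loss assumes the model returns one row of forecasts per frame):

using Flux, FluxArchitectures

poollength, datalength, horizon = 10, 1000, 6
input, target = get_data(:exchange_rate, poollength, datalength, horizon)

# 2 convolutional filters, 3 recurrent units, skip length 120
model = LSTnet(size(input, 1), 2, 3, poollength, 120)

function loss(x, y)
    Flux.reset!(model)             # clear the recurrent state between evaluations
    return Flux.mse(model(x), y')
end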
FluxArchitectures.TPALSTM - Function
TPALSTM(in, hiddensize, poollength)
TPALSTM(in, hiddensize, poollength, layers, filternum, filtersize)

Create a TPA-LSTM layer based on the architecture described in Shih et al., as implemented for PyTorch by Jing Wang. in specifies the number of input features. hiddensize defines the input and output size of the LSTM layer, and layers the number of LSTM layers (default 1). filternum and filtersize define the number and size of the filters in the attention layer; their default values are 32 and 1. poollength gives the length of the window for the pooled input data.

Data is expected as an array with dimensions features x poollength x 1 x data, i.e. for 1000 data points containing 31 features that have been windowed over 6 timesteps, TPALSTM expects an input of size (31, 6, 1, 1000).

Takes the keyword arguments initW and bias for the initialization of the Dense layers, and init for the initialization of the StackedLSTM network.

source
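
A minimal construction sketch with random data (the sizes and the hidden size of 10 are illustrative):

using Flux, FluxArchitectures

inp, poollength = 31, 6
model = TPALSTM(inp, 10, poollength)  # hidden size 10, defaults for layers, filternum and filtersize

x = randn(Float32, inp, poollength, 1, 50)
Flux.reset!(model)  # the stacked LSTM keeps state between calls
ŷ = model(x)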