How to Design an Artificial Neural Trading System
By Lou Mendelsohn

Unlike technical trading systems popular in the 1980s, artificial neural trading systems use an iterative “training” process to forecast prices and trading signals without rule-based “optimization” of system parameters or technical indicators. Instead, neural systems “learn” the hidden relationships within selected technical and fundamental data that are predictive of a specific market’s future price level.

This article examines the steps to follow in applying neural computing technology to the financial markets. First, you need to specify the output that you want to forecast. Next, you should identify the appropriate input data that the system needs in order to generate an accurate forecast. Then the type, size, and structure of your neural system must be defined. Finally, the system has to be trained and then tested before it can be used as a predictive tool in real-time trading. Most financial neural systems, or “neural networks,” generate as their projected outputs either real numbers, such as forecasted prices, or classifications, such as buy/sell signals or trend directions.

Input data should be selected based on its relevance to the output that you want to forecast. Unlike conventional technical trading systems, neural systems work best when both technical and fundamental input data are used. The more relevant input data the system is given, the better it can discern the hidden underlying patterns that determine its predictiveness.

Before you train the system, the data should be preprocessed or “massaged,” since neural systems work better with relative numbers rather than absolute numbers. For instance, it is preferable to use changes in price levels rather than actual daily prices as your inputs and output. Neural systems consist of one or more interconnected layers of neurons. In a typical system there are three types of layers: an input layer, a hidden layer, and an output layer.
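
To make this “massaging” step concrete, here is a minimal sketch (using NumPy; the function name and sample prices are hypothetical) of converting absolute closing prices into one-day changes:

```python
import numpy as np

def to_relative(prices):
    """Convert absolute daily closing prices into one-day changes.

    Neural systems tend to train better on relative values, so the
    network is fed day-over-day differences rather than raw levels.
    """
    prices = np.asarray(prices, dtype=float)
    return np.diff(prices)  # prices[t] - prices[t-1]

closes = [97.50, 97.88, 97.25, 98.12, 98.44]
print(to_relative(closes))  # approximately [0.38, -0.63, 0.87, 0.32]
```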

One choice of system architecture successfully applied to financial forecasting is known as a feed-forward network with back-propagation supervised learning. This design has two or more layers. Neurons within a layer are not interconnected, while neurons in one layer receive inputs from each neuron in the previous layer and send outputs only to each neuron in the following layer. Each of these connections is assigned a weight, or strength, which determines how much influence one neuron’s output has on the next.
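
As a rough sketch of this layered, fully connected structure (the layer sizes and sigmoid activation are illustrative assumptions, not part of the original design):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """A common neuron activation function; squashes any input to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: 8 input neurons -> 5 hidden -> 6 output.
W1 = rng.normal(scale=0.1, size=(8, 5))  # input-to-hidden connection weights
W2 = rng.normal(scale=0.1, size=(5, 6))  # hidden-to-output connection weights

def feed_forward(x):
    """Forward flow of activation: every neuron receives input from
    each neuron in the previous layer, scaled by its connection weight,
    and sends its output only to neurons in the following layer."""
    hidden = sigmoid(x @ W1)
    output = sigmoid(hidden @ W2)
    return output

print(feed_forward(rng.normal(size=8)))  # six forecasted output values
```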

The input layer receives input data. The number of neurons in this layer is determined by how many different data categories are used, with each category taking up one input neuron. For instance, in a T-bond price forecasting system, if your input data includes each day’s closing price spread between T-bonds and the D-mark, Japanese yen, T-bills, Eurodollar, Swiss franc, and Dollar Index, as well as the Fed Funds rate and Dow Jones Utility Average (a total of eight categories of data), initially the input layer would be composed of eight neurons. If you preprocess the data by taking a one-day momentum on the closing price of each of these markets, or smooth the time series with moving averages, you will increase the number of input neurons accordingly.
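
A short sketch of how the input layer grows with preprocessing (the market values below are invented for illustration):

```python
import numpy as np

# Eight hypothetical data categories from the T-bond example, one value
# per category per day (numbers invented for illustration).
yesterday = np.array([0.42, -0.15, 1.03, 0.88, -0.27, 0.64, 3.25, 221.5])
today     = np.array([0.45, -0.12, 1.01, 0.90, -0.31, 0.66, 3.25, 223.0])

# The raw categories alone occupy 8 input neurons.
x_raw = today

# Adding a one-day momentum for each category doubles the input layer.
momentum = today - yesterday
x_full = np.concatenate([x_raw, momentum])
print(x_full.size)  # 16 input neurons
```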

Depending on the number of input markets and the extensiveness of the preprocessing, it is not uncommon for a neural system to have several hundred input neurons. With supervised learning, each day’s input data provided to the system during training would also include the next day’s T-bond prices. Before training, you should randomly shuffle these paired data, so that they are not presented to the system chronologically. The hidden layer neurons do not interact directly with the outside world. This is where the network creates its internal symbol set, recoding the input data into a form that captures the hidden relationships within the data and allows the system to generalize.
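
Here is a minimal sketch of this pairing-and-shuffling step (NumPy-based; the arrays are random stand-ins for real preprocessed market data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fact-days: each pairs one day's preprocessed inputs
# with the next day's T-bond price as the supervised target.
features = rng.normal(size=(250, 16))   # 250 days x 16 input neurons
next_day_price = rng.normal(size=250)   # known next-day values

# Shuffle the pairs together so the system does not see the
# data in chronological order during training.
order = rng.permutation(250)
train_x, train_y = features[order], next_day_price[order]
```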

The appropriate number of neurons in the hidden layer, and the number of hidden layers to use, is often found through experimentation. Too few neurons prevent the system from training. If too many neurons are selected, the system memorizes the training patterns without being able to generalize. Then, if it is subsequently presented with different patterns, it will be unable to forecast accurately because it has not discerned the hidden relationships.
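
To see the too-few-neurons failure mode concretely, here is a toy experiment (a sketch only: it uses the classic XOR pattern rather than market data, a small gradient-trained network, and illustrative sizes and learning rate). With a single hidden neuron the error typically stalls, while larger hidden layers can fit the pattern:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def final_error(n_hidden, X, y, epochs=5000, lr=1.0, seed=0):
    """Train a tiny two-layer net by gradient descent and return its
    final training error, to compare hidden-layer sizes."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # output-layer error signal
        d_hid = (d_out @ W2.T) * h * (1 - h)  # error flowing backward
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_hid
        b1 -= lr * d_hid.sum(axis=0)
    return float(np.mean((out - y) ** 2))

# XOR: a pattern a single hidden neuron provably cannot capture.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
for n in (1, 2, 4, 16):
    print(n, final_error(n, X, y))
```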

The format of the output that you want to forecast determines the number of output neurons needed. Each output category uses one neuron. If you want to predict the next day’s high, low, midpoint, and a buy, sell, or stand-aside signal for T-bonds, the system would need six output-layer neurons. During training, the system’s forecasted output of the next day’s T-bond prices and signal is compared with their known values. Forecasting errors are used to modify each neuron’s connection strength, or weight, so that during subsequent training iterations the system’s forecast will be closer to the actual value. The “learning law” for a given network governs how to modify these connection weights to minimize output errors during later training iterations. While there are many learning laws that can be applied to neural systems, one of the most popular is the Generalized Delta Rule, or back-propagation method.
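
As a compact sketch of one such training iteration under the Generalized Delta Rule (NumPy-based; the layer sizes, learning rate, sigmoid activation, and squared-error measure are illustrative assumptions, and bias terms are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: 8 inputs, 5 hidden neurons, 6 outputs.
W1 = rng.normal(scale=0.1, size=(8, 5))
W2 = rng.normal(scale=0.1, size=(5, 6))
LEARNING_RATE = 0.1

def train_step(x, target):
    """One back-propagation iteration: forward pass, compare the
    forecast with the known value, then adjust each connection
    weight to reduce the error on later iterations."""
    global W1, W2
    hidden = sigmoid(x @ W1)
    output = sigmoid(hidden @ W2)

    # Error signal at the output layer.
    delta_out = (output - target) * output * (1.0 - output)
    # Error flowing backward to the hidden layer.
    delta_hid = (delta_out @ W2.T) * hidden * (1.0 - hidden)

    # Nudge the connection strengths against the error gradient.
    W2 -= LEARNING_RATE * np.outer(hidden, delta_out)
    W1 -= LEARNING_RATE * np.outer(x, delta_hid)
    return float(np.sum((output - target) ** 2))

# One fact-day: today's inputs paired with tomorrow's known values.
err = train_step(rng.normal(size=8), rng.uniform(size=6))
print(err)
```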

During each iteration of training, the paired data presented to the network generates a forward flow of activation from the input to the output layer. If the outputs forecasted by the system are incorrect, a flow of information is generated backward from the output layer to the input layer, adjusting the connection weights. On the next training iteration, when the system is presented with the same paired input data, it will be more accurate in its forecast. The time necessary to perform training can be considerable, depending on your computer’s speed, the number of days of data (known as “fact-days”), and the number of neurons in each layer.

When the system reaches a stable state, it is ready for further testing. You can perform “walk-forward” testing by creating a testing file composed of fact-days that were not used during training. Depending on the test results, you may need to redesign the system, including its architecture, learning law, input data, or the methods and extent of preprocessing. You may even need to change the forecasted output that you want to predict. Unlike training, during testing the connection strengths are not adjusted to compensate for errors.
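
A minimal sketch of this forward-only testing pass (random weights and random fact-days stand in for a trained system and a real testing file; mean squared error is one illustrative way to score the forecasts):

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weights would come from a trained system; random here for illustration.
W1 = rng.normal(size=(8, 5))
W2 = rng.normal(size=(5, 1))

# A testing file of fact-days that were never shown during training.
test_x = rng.normal(size=(30, 8))
test_y = rng.uniform(size=(30, 1))

# Forward pass only: unlike training, the connection strengths
# are not adjusted to compensate for errors.
preds = sigmoid(sigmoid(test_x @ W1) @ W2)
print("walk-forward MSE:", float(np.mean((preds - test_y) ** 2)))
```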

If your system cannot train on certain paired data, the data may contain contradictory or ambiguous information. You should reexamine each of your data inputs or eliminate redundant data-massaging methods before retraining. Once your network has trained successfully, it is easy for it to forecast the expected output in real time. All you have to do is provide it with the necessary input data, just as you did during training. However, as with testing, no adjustments are made to the connection strengths. You should consider retraining your system periodically, experimenting with different data and massaging techniques.

Neural trading systems represent a major milestone in the development of analytic tools for time series forecasting in the financial markets. With the ability to develop flexible, adaptive trading systems that do not rely on predefined trading rules to model the markets, this “sixth generation” technology promises to bridge the gap between technical and fundamental analysis, bringing them together into a combined trading strategy that fully recognizes the impact of “intermarket analysis” in the global markets of the 1990s.
