Hyper neuron - One neuron with infinite states

Research output: Chapter in Book or Conference Publication/Proceeding › Conference Publication › peer-review

Abstract

Neural networks learn by creating a decision boundary. The shape and smoothness of the decision boundary are ultimately determined by the activation function and the architecture. As the number of neurons in an artificial neural network increases to infinity, the decision function becomes smooth. A network with an infinite number of neurons is impossible to implement with finite resources, but the behavior of such a network can be modeled to an arbitrary degree of precision using standard numerical techniques. We name the resulting model the Hyper Neuron. A flexible characteristic function controls the rate of variation in the weights of these neurons. The Hyper Neuron does not require any assumptions about the parameter distribution; it uses a numerical methodology, in contrast with previous work (such as infinite neural networks) that relies on distributional assumptions.
In the classical model of a neuron, each neuron has a single state and an output determined by an input, the weights, and the bias. Consider instead a neuron with more than one distinct output for the same input. A layer made from an infinite number of such neurons can be modeled as a single neuron with an infinite number of states and an infinite weight field. This kind of neuron is called a “Hyper Neuron”, denoted by the symbol Q. Now consider the independent variable x over which the Hyper Neuron is defined, i.e., the function Q(x) is defined over the space x ∈ ℝ^N. To model the data so that it represents the target distribution, we use weighted inputs and non-linearity functions, where the weights are not vectors but multidimensional functions (Formula Presented) which define the weight field between two Hyper Neurons when the input is given by another Hyper Neuron. This function simplifies to (Formula Presented) when the input is a feature space or a conventional layer.
In these equations, k is the previous layer, k + 1 is the layer containing the Hyper Neuron, i_k is the ith element in the previous layer, and p represents the parameters of the weight field function. Hyper Neurons follow naturally from numerical models of an infinite number of conventional neurons in a single layer, and the associated weight fields are described by characteristic functions. To validate this idea, experiments used sinusoidal functions for the weight fields, because their frequency, included as a parameter that the network learns from the input data, allows rapid variation. The proposed model was compared against a conventional model containing up to 7 neurons.
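The idea of a layer whose weights form a continuous field can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the function names, the grid x ∈ [0, 1], the grid resolution, and the per-input parameter tuples are all assumptions; only the use of a sinusoidal characteristic function with a learnable frequency is taken from the abstract.

```python
# Hypothetical sketch of a Hyper Neuron forward pass: the weights are a
# parametric sinusoidal field w(x; p) rather than a vector, and the
# "infinite" number of states is approximated by sampling x on a grid.
import numpy as np

def sinusoidal_weight_field(x, amplitude, frequency, phase):
    """Characteristic function w(x; p) with parameters p = (amplitude,
    frequency, phase); the frequency would be learned from data."""
    return amplitude * np.sin(frequency * x + phase)

def hyper_neuron_output(inputs, params, activation=np.tanh):
    """Approximate an infinitely wide layer fed by a conventional layer.

    inputs : shape (n_in,) -- elements i_k of the previous layer k
    params : one (amplitude, frequency, phase, bias) tuple per input
    """
    # Sample the continuous state variable x; a finer grid approximates
    # the infinite number of states more closely.
    x = np.linspace(0.0, 1.0, 256)
    field = np.zeros_like(x)
    for inp, (a, f, ph, b) in zip(inputs, params):
        # Each input contributes through its own weight field w_{i_k}(x; p).
        field += inp * sinusoidal_weight_field(x, a, f, ph) + b
    # The Hyper Neuron's output is a function of x, not a single scalar.
    return activation(field)

out = hyper_neuron_output(np.array([0.5, -1.0]),
                          [(1.0, 3.0, 0.0, 0.1), (0.5, 7.0, 1.0, 0.0)])
print(out.shape)  # (256,) -- a discretized infinite-state output
```

Refining the grid (e.g. 256 → 4096 points) models the infinite-neuron limit to higher precision at proportional cost, which is the sense in which standard numerical techniques approximate the idealized network.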

Original language: English
Title of host publication: Artificial Neural Networks and Machine Learning – ICANN 2017 – 26th International Conference on Artificial Neural Networks, Proceedings
Editors: Paul F. Verschure, Alessandra Lintas, Alessandro E. Villa, Stefano Rovetta
Publisher: Springer-Verlag
Pages: 432-433
Number of pages: 2
ISBN (Print): 9783319685991
Publication status: Published - 2017
Event: 26th International Conference on Artificial Neural Networks, ICANN 2017 - Alghero, Italy
Duration: 11 Sep 2017 – 14 Sep 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10613 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 26th International Conference on Artificial Neural Networks, ICANN 2017
Country/Territory: Italy
City: Alghero
Period: 11/09/17 – 14/09/17

Keywords

  • Artificial neural networks
  • Infinite neural networks
  • Numerical modeling
