Neural network advantages: a network with a single hidden layer can approximate any continuous function of arbitrarily many variables on a compact domain to arbitrary accuracy (see Cybenko's universal approximation theorem). Neural networks are well suited to machine learning applications, since the weights can be adjusted fairly easily, and training can be performed in a range of ways, including pairing the network with a genetic algorithm so that appropriate outputs can be learned even when it is difficult to produce a specification for the proper output by hand (a brief sketch of this approach appears at the end of this section). Recurrent neural networks with rational weights are Turing-equivalent (Siegelmann and Sontag). Certain papers located by this student suggest that the addition of irrational weights permits the construction of a sparse oracle machine, though the useful implication of this is unclear; one would expect a viable path to an oracle machine to be much more widely known, so it is presumed that something limits the utility of this construction, whether the difficulty or outright impossibility of implementing it, or a constraint on which Turing-noncomputable functions the resulting oracle machine can actually solve.

Neural network disadvantages: as with any nontrivial use of machine learning techniques, a learning neural network may extend human capability in a noninformative way; that is, while the network may acquire the ability to solve a problem, no human is left with the knowledge of how to solve that problem without nontrivial further effort, and the relation between input and output remains, as noted in a study of the use of ANNs for traffic safety modeling, essentially a black box. Similarly, to use the learning capability of an ANN, the network must be trained, typically on a large sample of data; when such data is unavailable, it may not be possible to train an ANN effectively for the problem at hand without excessive human interaction. For data mining in particular, other techniques can learn much more rapidly and require smaller datasets.
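As a brief illustration of the training approach mentioned under the advantages above (pairing a network with a genetic algorithm), the following sketch evolves the weight vector of a one-hidden-layer network so that it approximates a target behavior given only a fitness score, with no hand-written rule for producing the correct output. The target function (sin), the network size, and the genetic-algorithm parameters are illustrative assumptions chosen for this sketch, not details taken from the sources cited in this section.

    # Minimal sketch: evolving the weights of a small feedforward network
    # with a simple genetic algorithm (truncation selection, uniform
    # crossover, Gaussian mutation). All settings here are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    N_HIDDEN = 8                                        # hidden units
    N_WEIGHTS = N_HIDDEN + N_HIDDEN + N_HIDDEN + 1      # weights and biases

    def forward(genome, x):
        """Evaluate a 1-input, 1-output network whose parameters are packed in genome."""
        w1 = genome[:N_HIDDEN].reshape(1, N_HIDDEN)
        b1 = genome[N_HIDDEN:2 * N_HIDDEN]
        w2 = genome[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
        b2 = genome[-1]
        hidden = np.tanh(x @ w1 + b1)                   # single hidden layer
        return hidden @ w2 + b2

    # Fitness: negative mean squared error against a sampled target behavior.
    # Here the target is sin(x); in practice it could be any scoreable outcome
    # for which no explicit output specification exists.
    xs = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
    target = np.sin(xs)

    def fitness(genome):
        return -np.mean((forward(genome, xs) - target) ** 2)

    POP, GENERATIONS, MUTATION = 60, 300, 0.1
    population = rng.normal(0.0, 1.0, size=(POP, N_WEIGHTS))

    for gen in range(GENERATIONS):
        scores = np.array([fitness(g) for g in population])
        order = np.argsort(scores)[::-1]                # best genomes first
        parents = population[order[:POP // 4]]          # truncation selection
        children = []
        while len(children) < POP:
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(N_WEIGHTS) < 0.5          # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0.0, MUTATION, N_WEIGHTS)
            children.append(child)
        population = np.array(children)

    best = population[np.argmax([fitness(g) for g in population])]
    print("final mean squared error:", -fitness(best))

Note that this sketch also illustrates the black-box concern raised under the disadvantages above: the evolved weight vector solves the task, but it yields no human-readable account of how the output relates to the input.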