Seismic waveform inversion is one of many geophysical problems that can be posed as a nonlinear multiparameter optimization problem. Methods based on local linearization fail if the starting model is too far from the true model. We have investigated the applicability of genetic algorithms (GA) to the inversion of plane-wave seismograms. Like simulated annealing, genetic algorithms use a random walk in model space and a transition probability rule to help guide their search. Unlike a single simulated annealing run, however, genetic algorithms search from a randomly chosen population of models (strings) and work with a binary coding of the model parameter set. Unlike a pure random search, such as a Monte Carlo method, the search used in genetic algorithms is not directionless. Genetic algorithms essentially consist of three operations: selection, crossover, and mutation, which involve random number generation, string copies, and some partial string exchanges. The choice of the initial population and the probabilities of crossover and mutation are crucial for the practical implementation of the algorithm. We investigated the effects of these parameters in the inversion of plane-wave seismograms, using a normalized crosscorrelation function as the objective or fitness function E. We also introduce the concept of an 'update' probability to control the influence of past generations. The combination of a low mutation probability (approximately 0.01), a moderate crossover probability (approximately 0.6), and a high update probability (approximately 0.9) is found to be optimal for the convergence of the algorithm. Further, we show that concepts from simulated annealing can be used effectively to stretch the fitness function, which aids the convergence of the algorithm.
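The three GA operations and the update step can be sketched as follows. This is a minimal, generic illustration, not the paper's exact implementation: the selection scheme (fitness-proportionate here), the bit-string encoding details, and the population handling are assumptions, while the probability values are the approximate ones quoted above.

```python
import random

# Approximate values from the text: low mutation, moderate crossover,
# high update probability.
P_MUT, P_CROSS, P_UPDATE = 0.01, 0.6, 0.9

def select(pop, fitness):
    """Fitness-proportionate selection (one common choice; the paper's
    exact selection rule may differ)."""
    weights = [fitness(s) for s in pop]
    return random.choices(pop, weights=weights, k=len(pop))

def crossover(a, b):
    """Single-point crossover of two binary strings with probability P_CROSS."""
    if random.random() < P_CROSS:
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a, b

def mutate(s):
    """Flip each bit independently with probability P_MUT."""
    return ''.join(('1' if c == '0' else '0') if random.random() < P_MUT else c
                   for c in s)

def generation(pop, fitness):
    """One GA generation: selection, crossover, mutation, then an 'update'
    step that replaces an old member by its new candidate with probability
    P_UPDATE, so past generations retain some influence."""
    parents = select(pop, fitness)
    children = []
    for i in range(0, len(parents) - 1, 2):
        c1, c2 = crossover(parents[i], parents[i + 1])
        children += [mutate(c1), mutate(c2)]
    if len(children) < len(pop):        # odd population size
        children.append(mutate(parents[-1]))
    return [c if random.random() < P_UPDATE else p
            for p, c in zip(pop, children)]
```

In a waveform-inversion setting, each binary string would decode to a set of physical model parameters (e.g. layer velocities), and `fitness` would measure the fit between observed and synthetic seismograms.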
Thus, we propose to use exp(E/T) rather than E as the fitness function, where T (analogous to temperature in simulated annealing) is a properly chosen parameter that can change slowly with each generation. Also, by repeating the GA optimization procedure several times with different randomly chosen initial model populations, we derive a 'very good' subset of models from the entire model space and calculate the a posteriori probability density σ(m) ∝ exp(E(m)/T). The σ(m) values are then used to calculate a 'mean' model, which is found to be close to the true model.
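The fitness stretching and the σ(m)-weighted mean model can be written compactly as below. This is a sketch under stated assumptions: models are taken as plain vectors of physical parameters, and the max-subtraction in the weights is a standard numerical-stability trick, not something specified in the text.

```python
import math

def stretched_fitness(E, T):
    """Temperature-stretched fitness exp(E/T): a small T exaggerates the
    difference between good and bad models (the annealing analogy)."""
    return math.exp(E / T)

def posterior_mean(models, energies, T):
    """Weighted mean model under sigma(m) proportional to exp(E(m)/T).

    models   -- list of parameter vectors (lists of floats)
    energies -- corresponding fitness values E(m)
    T        -- temperature parameter

    Subtracting max(E) before exponentiating leaves the normalized
    weights unchanged but avoids overflow for large E/T.
    """
    Emax = max(energies)
    w = [math.exp((E - Emax) / T) for E in energies]
    Z = sum(w)
    n = len(models[0])
    return [sum(wi * m[j] for wi, m in zip(w, models)) / Z
            for j in range(n)]
```

As T shrinks, the weights concentrate on the best-fitting models, so the mean model approaches the best member of the collected subset; as T grows, the mean tends toward a simple average over all collected models.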