This talk will present speech recognition results obtained with two microphone array speech corpora: the noisy CMU multi-microphone corpus and an artificially created reverberant array database based on the CSR-I (WSJ0) corpus. I will discuss the influence of noise and reverberation time on the word error rates obtained when single-channel test data are decoded by a standard ASR system trained on clean or reverberant data. A second line of investigation considers the potential benefit of employing several channels in the recognition process. Various experiments demonstrate the effects of delay-and-sum beamforming, as well as the degree of agreement and complementarity between recognizer transcriptions based on different channels.
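For readers unfamiliar with the technique, a minimal sketch of delay-and-sum beamforming follows. It is purely illustrative and not the setup used in the talk's experiments: the array layout as a (channels, samples) NumPy array, the cross-correlation delay estimate, and the function names `estimate_delay` and `delay_and_sum` are all assumptions introduced here.

```python
# Illustrative delay-and-sum beamformer (a sketch, not the talk's actual system).
import numpy as np


def estimate_delay(reference, channel):
    """Estimate the integer-sample delay of `channel` relative to
    `reference` from the peak of their cross-correlation."""
    corr = np.correlate(channel, reference, mode="full")
    return np.argmax(corr) - (len(reference) - 1)


def delay_and_sum(signals):
    """Align each channel to channel 0 and average the aligned channels,
    reinforcing the target speech while averaging down uncorrelated noise.

    `signals` is a (channels, samples) array; channel 0 is the reference.
    """
    reference = signals[0]
    aligned = []
    for ch in signals:
        d = estimate_delay(reference, ch)
        # Shift the channel by -d samples so it lines up with the reference.
        # np.roll wraps samples circularly; acceptable for a short sketch.
        aligned.append(np.roll(ch, -d))
    return np.mean(aligned, axis=0)
```

The design point is that after time alignment the speech components add coherently across channels while uncorrelated noise does not, which is why even this simplest beamformer can improve recognition on multi-channel recordings.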