Today's speech recognition systems perform very well in close-talking scenarios, where the user speaks directly into a microphone (headset, lapel microphone, etc.). Since performance decreases dramatically in the presence of background noise and reverberation, it would be desirable to "get rid" of these distortions. That is what speech feature enhancement tries to achieve: mapping distorted or noisy speech features to clean ones. In this talk, I will present a method that sequentially estimates the evolution of time-varying noise in order to remove the speech feature distortions it causes. The presentation will be very down-to-earth, i.e., it will depict things graphically rather than through equations.
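
For readers who want a concrete picture before the talk, the following is a minimal Python sketch of the general idea of sequential noise tracking and feature enhancement. It uses a simple recursive (exponential-moving-average) noise estimate with spectral subtraction; this is an illustrative stand-in, not the speaker's actual method, and all names and parameters here are hypothetical.

```python
import numpy as np

def enhance_features(noisy_frames, alpha=0.98, floor=1e-3):
    """Toy sequential feature enhancement in the power-spectral domain.

    noisy_frames: array of shape (T, F) with per-frame power spectra.
    alpha: smoothing factor for the recursive noise estimate.
    Returns enhanced power spectra of the same shape.

    NOTE: an illustrative sketch only -- the method presented in the talk
    presumably uses a more principled sequential estimator.
    """
    noise_est = noisy_frames[0].copy()   # initialize noise from the first frame
    enhanced = np.empty_like(noisy_frames)
    for t, frame in enumerate(noisy_frames):
        # Sequentially update the noise estimate so it can follow
        # time-varying noise; adapt only where the frame is plausibly
        # noise-dominated (low energy relative to the current estimate).
        noise_like = frame < 2.0 * noise_est
        noise_est = np.where(noise_like,
                             alpha * noise_est + (1 - alpha) * frame,
                             noise_est)
        # Spectral subtraction with a floor to avoid negative power.
        enhanced[t] = np.maximum(frame - noise_est, floor * frame)
    return enhanced

# Demo on synthetic data: random "speech" power plus slowly drifting noise.
rng = np.random.default_rng(0)
T, F = 200, 40
clean = rng.gamma(2.0, 1.0, size=(T, F))
drift = 0.5 + 0.4 * np.sin(np.linspace(0.0, 3.0, T))[:, None]  # time-varying noise level
noisy = clean + drift * rng.gamma(2.0, 1.0, size=(T, F))
print(enhance_features(noisy).shape)  # (200, 40)
```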