Recent work on paraphrasing has drawn attention to the importance of context-sensitive similarity. We develop a model for computing similarity in context, inspired by Latent Dirichlet Allocation (LDA). LDA is a generative probabilistic model of document collections in which latent "topic" variables explain the observed words. We instead model a collection of patterns such as "X find solution to Y", where each pattern is described by the X and Y fillers it occurs with in a large corpus. Our latent variables are "meanings": we assume that each pattern is a mixture over meanings, and that each meaning is characterized by a distribution over X-filler and Y-filler words. In this way we obtain for each pattern a vector representation over the set of meanings, from which we can compute the similarity of two patterns, as well as their similarity conditioned on a given context (X and Y fillers). Preliminary experiments show that the method outperforms the DIRT algorithm as well as additive and multiplicative composition methods for vector space models.
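The similarity computation described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual model: the toy dimensions, random parameters, and the Bayesian reweighting of meanings by context fillers are all assumptions made for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: 2 patterns, 4 latent "meanings", 6 filler words.
# theta[p] ~ P(meaning | pattern p); phi[m] ~ P(filler word | meaning m).
K, V = 4, 6
theta = rng.dirichlet(np.ones(K), size=2)  # pattern -> meaning mixture
phi = rng.dirichlet(np.ones(V), size=K)    # meaning -> filler distribution

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def contextualize(theta_p, filler_ids):
    """Reweight a pattern's meaning mixture by the observed context:
    P(meaning | pattern, context) proportional to
    P(meaning | pattern) * prod_w P(w | meaning)."""
    w = theta_p * np.prod(phi[:, filler_ids], axis=1)
    return w / w.sum()

# Out-of-context similarity of the two patterns over meanings.
sim = cosine(theta[0], theta[1])

# Similarity of the same patterns conditioned on context fillers 1 and 3.
sim_ctx = cosine(contextualize(theta[0], [1, 3]),
                 contextualize(theta[1], [1, 3]))
print(round(sim, 3), round(sim_ctx, 3))
```

Because the meaning mixtures are non-negative, both cosine scores lie in [0, 1]; conditioning on fillers shifts probability mass toward the meanings that best explain the context, so the two scores generally differ.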