We address the problem of training semantic role labeling (SRL) models for resource-poor languages. SRL-annotated resources of reasonable size are now available for several of the more popular languages, but the task is far from solved. Firstly, while state-of-the-art SRL models show impressive performance on in-domain data, the quality of the automatically produced annotations on real-world data turns out to be much lower. Secondly, many different, often incompatible annotation schemes have emerged in the preparation of resources for different languages, and combining these resources in order to simplify the construction of SRL models for resource-poor languages is by no means trivial. Thirdly, while the problem of automatically (or mostly automatically) creating annotated resources for new languages has received much attention, largely in the context of cross-lingual annotation projection, very few actual corpora have been built using such techniques to facilitate the annotation process. This may be attributed in part to the fact that annotation-projection approaches often require extensive filtering or manual modifications to the algorithm to fit the language pair in question.

In this talk we will briefly describe our recent work on bootstrapping semantic labeling models from parallel data and then focus on an ongoing project to adapt a semantic role labeling model to a different language directly, using the cross-lingual word representations that have recently become available. Such an approach would provide a way of constructing a first-approximation SRL model for a new language with little or no tweaking, using only a source-language model, a parallel corpus and raw monolingual data for each language (if available).
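To make the annotation-projection idea mentioned above concrete, the following is a minimal sketch of its core step: copying role labels from a labeled source sentence to its translation through a word alignment. The sentences, alignment and label inventory are invented toy data; real pipelines operate on full argument spans and add the alignment-confidence filtering discussed above.

```python
# Source-language (English) tokens with predicted semantic roles
# (None means the token heads no argument).
src_tokens = ["the", "dog", "chased", "the", "ball"]
src_roles  = [None, "A0", "PRED", None, "A1"]

# Target-language (German) tokens and a word alignment given as
# (source_index, target_index) pairs, e.g. from an automatic aligner
# run over a parallel corpus.
tgt_tokens = ["der", "Hund", "jagte", "den", "Ball"]
alignment  = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]

# Project every source role onto the aligned target token.
tgt_roles = [None] * len(tgt_tokens)
for s, t in alignment:
    if src_roles[s] is not None:
        tgt_roles[t] = src_roles[s]

print(list(zip(tgt_tokens, tgt_roles)))
# [('der', None), ('Hund', 'A0'), ('jagte', 'PRED'),
#  ('den', None), ('Ball', 'A1')]
```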
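The direct model-transfer approach described in the last paragraph can likewise be illustrated with a small sketch, assuming a shared cross-lingual embedding space: a role classifier is trained on source-language embeddings only and then applied unchanged to target-language words. The three-dimensional embedding table, the word lists and the one-word-per-argument simplification are all hypothetical; a real system would use embeddings induced from parallel and monolingual data together with richer argument features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy cross-lingual embedding space (invented values): words that
# translate each other receive nearby vectors.
EMB = {
    # English (source language)
    "dog":  np.array([0.90, 0.10, 0.00]),
    "ball": np.array([0.10, 0.90, 0.00]),
    "park": np.array([0.00, 0.10, 0.90]),
    # German (target language), close to their translations
    "Hund": np.array([0.85, 0.15, 0.05]),
    "Ball": np.array([0.15, 0.85, 0.05]),
    "Park": np.array([0.05, 0.15, 0.85]),
}

# Source-language training data: word -> role of the argument it heads
# (a drastic simplification of real SRL features).
source_data = [("dog", "A0"), ("ball", "A1"), ("park", "AM-LOC")]

# Train the role classifier on source-language embeddings only.
X = np.stack([EMB[w] for w, _ in source_data])
y = [role for _, role in source_data]
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Apply the same classifier, with no retraining, to target-language
# words: the shared space makes target annotation unnecessary.
for word in ["Hund", "Ball", "Park"]:
    print(word, "->", clf.predict(EMB[word].reshape(1, -1))[0])
```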